2025-06-01 22:04:49.291931 | Job console starting
2025-06-01 22:04:49.307418 | Updating git repos
2025-06-01 22:04:49.903731 | Cloning repos into workspace
2025-06-01 22:04:50.103229 | Restoring repo states
2025-06-01 22:04:50.129938 | Merging changes
2025-06-01 22:04:50.129964 | Checking out repos
2025-06-01 22:04:50.362211 | Preparing playbooks
2025-06-01 22:04:51.072408 | Running Ansible setup
2025-06-01 22:04:55.390511 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-06-01 22:04:56.188829 |
2025-06-01 22:04:56.189013 | PLAY [Base pre]
2025-06-01 22:04:56.213584 |
2025-06-01 22:04:56.213756 | TASK [Setup log path fact]
2025-06-01 22:04:56.234544 | orchestrator | ok
2025-06-01 22:04:56.258567 |
2025-06-01 22:04:56.258742 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-06-01 22:04:56.299691 | orchestrator | ok
2025-06-01 22:04:56.311667 |
2025-06-01 22:04:56.311792 | TASK [emit-job-header : Print job information]
2025-06-01 22:04:56.358663 | # Job Information
2025-06-01 22:04:56.359017 | Ansible Version: 2.16.14
2025-06-01 22:04:56.359081 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-06-01 22:04:56.359137 | Pipeline: post
2025-06-01 22:04:56.359176 | Executor: 521e9411259a
2025-06-01 22:04:56.359207 | Triggered by: https://github.com/osism/testbed/commit/d6099138e48b52987bb07a725af029effb071be4
2025-06-01 22:04:56.359240 | Event ID: 94ba7f50-3f23-11f0-9e21-56db86a6d01c
2025-06-01 22:04:56.369665 |
2025-06-01 22:04:56.369828 | LOOP [emit-job-header : Print node information]
2025-06-01 22:04:56.509597 | orchestrator | ok:
2025-06-01 22:04:56.509983 | orchestrator | # Node Information
2025-06-01 22:04:56.510057 | orchestrator | Inventory Hostname: orchestrator
2025-06-01 22:04:56.510111 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-06-01 22:04:56.510152 | orchestrator | Username: zuul-testbed06
2025-06-01 22:04:56.510189 | orchestrator | Distro: Debian 12.11
2025-06-01 22:04:56.510231 | orchestrator | Provider: static-testbed
2025-06-01 22:04:56.510271 | orchestrator | Region:
2025-06-01 22:04:56.510309 | orchestrator | Label: testbed-orchestrator
2025-06-01 22:04:56.510345 | orchestrator | Product Name: OpenStack Nova
2025-06-01 22:04:56.510379 | orchestrator | Interface IP: 81.163.193.140
2025-06-01 22:04:56.532508 |
2025-06-01 22:04:56.532675 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-06-01 22:04:57.014875 | orchestrator -> localhost | changed
2025-06-01 22:04:57.027130 |
2025-06-01 22:04:57.027327 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-06-01 22:04:58.358928 | orchestrator -> localhost | changed
2025-06-01 22:04:58.382444 |
2025-06-01 22:04:58.382743 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-06-01 22:04:58.671861 | orchestrator -> localhost | ok
2025-06-01 22:04:58.679417 |
2025-06-01 22:04:58.679538 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-06-01 22:04:58.710073 | orchestrator | ok
2025-06-01 22:04:58.727052 | orchestrator | included: /var/lib/zuul/builds/3bfa790e389247fa8ffbfba5f0ea409c/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-06-01 22:04:58.735286 |
2025-06-01 22:04:58.735398 | TASK [add-build-sshkey : Create Temp SSH key]
2025-06-01 22:05:00.427503 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-06-01 22:05:00.427901 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/3bfa790e389247fa8ffbfba5f0ea409c/work/3bfa790e389247fa8ffbfba5f0ea409c_id_rsa
2025-06-01 22:05:00.427954 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/3bfa790e389247fa8ffbfba5f0ea409c/work/3bfa790e389247fa8ffbfba5f0ea409c_id_rsa.pub
2025-06-01 22:05:00.427981 | orchestrator -> localhost | The key fingerprint is:
2025-06-01 22:05:00.428009 | orchestrator -> localhost | SHA256:aQ+FkIJF5NfX0djcKn2HDXU2A+oDFx2YtZCIJA5oGRw zuul-build-sshkey
2025-06-01 22:05:00.428032 | orchestrator -> localhost | The key's randomart image is:
2025-06-01 22:05:00.428070 | orchestrator -> localhost | +---[RSA 3072]----+
2025-06-01 22:05:00.428092 | orchestrator -> localhost | |.E=*+.oo. .oBOo=+|
2025-06-01 22:05:00.428113 | orchestrator -> localhost | | =..+ o+ o ==o*.=|
2025-06-01 22:05:00.428133 | orchestrator -> localhost | |. .o. o.oo.o = |
2025-06-01 22:05:00.428152 | orchestrator -> localhost | | . ++ . + +|
2025-06-01 22:05:00.428171 | orchestrator -> localhost | | S o . ..|
2025-06-01 22:05:00.428200 | orchestrator -> localhost | | . o . |
2025-06-01 22:05:00.428221 | orchestrator -> localhost | | . |
2025-06-01 22:05:00.428240 | orchestrator -> localhost | | |
2025-06-01 22:05:00.428260 | orchestrator -> localhost | | |
2025-06-01 22:05:00.428281 | orchestrator -> localhost | +----[SHA256]-----+
2025-06-01 22:05:00.428352 | orchestrator -> localhost | ok: Runtime: 0:00:01.185092
2025-06-01 22:05:00.436448 |
2025-06-01 22:05:00.436592 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-06-01 22:05:00.468587 | orchestrator | ok
2025-06-01 22:05:00.479817 | orchestrator | included: /var/lib/zuul/builds/3bfa790e389247fa8ffbfba5f0ea409c/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-06-01 22:05:00.489492 |
2025-06-01 22:05:00.489628 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-06-01 22:05:00.513637 | orchestrator | skipping: Conditional result was False
2025-06-01 22:05:00.522857 |
2025-06-01 22:05:00.523010 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-06-01 22:05:01.144176 | orchestrator | changed
2025-06-01 22:05:01.150768 |
2025-06-01 22:05:01.151515 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-06-01 22:05:01.433629 | orchestrator | ok
2025-06-01 22:05:01.442974 |
2025-06-01 22:05:01.443173 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-06-01 22:05:01.880312 | orchestrator | ok
2025-06-01 22:05:01.888654 |
2025-06-01 22:05:01.888856 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-06-01 22:05:02.325837 | orchestrator | ok
2025-06-01 22:05:02.335324 |
2025-06-01 22:05:02.335481 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-06-01 22:05:02.369993 | orchestrator | skipping: Conditional result was False
2025-06-01 22:05:02.383495 |
2025-06-01 22:05:02.383819 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-06-01 22:05:02.867391 | orchestrator -> localhost | changed
2025-06-01 22:05:02.893591 |
2025-06-01 22:05:02.893775 | TASK [add-build-sshkey : Add back temp key]
2025-06-01 22:05:03.250197 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/3bfa790e389247fa8ffbfba5f0ea409c/work/3bfa790e389247fa8ffbfba5f0ea409c_id_rsa (zuul-build-sshkey)
2025-06-01 22:05:03.250943 | orchestrator -> localhost | ok: Runtime: 0:00:00.018643
2025-06-01 22:05:03.266570 |
2025-06-01 22:05:03.266805 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-06-01 22:05:03.689282 | orchestrator | ok
2025-06-01 22:05:03.698059 |
2025-06-01 22:05:03.698197 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-06-01 22:05:03.732921 | orchestrator | skipping: Conditional result was False
2025-06-01 22:05:03.795652 |
2025-06-01 22:05:03.795831 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-06-01 22:05:04.210608 | orchestrator | ok
2025-06-01 22:05:04.240410 |
2025-06-01 22:05:04.240643 | TASK [validate-host : Define zuul_info_dir fact]
2025-06-01 22:05:04.295252 | orchestrator | ok
2025-06-01 22:05:04.305830 |
2025-06-01 22:05:04.306100 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-06-01 22:05:04.656052 | orchestrator -> localhost | ok
2025-06-01 22:05:04.663804 |
2025-06-01 22:05:04.663928 | TASK [validate-host : Collect information about the host]
2025-06-01 22:05:05.920462 | orchestrator | ok
2025-06-01 22:05:05.936416 |
2025-06-01 22:05:05.936567 | TASK [validate-host : Sanitize hostname]
2025-06-01 22:05:06.019915 | orchestrator | ok
2025-06-01 22:05:06.031630 |
2025-06-01 22:05:06.031826 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-06-01 22:05:06.595661 | orchestrator -> localhost | changed
2025-06-01 22:05:06.610669 |
2025-06-01 22:05:06.610965 | TASK [validate-host : Collect information about zuul worker]
2025-06-01 22:05:07.044528 | orchestrator | ok
2025-06-01 22:05:07.053235 |
2025-06-01 22:05:07.053406 | TASK [validate-host : Write out all zuul information for each host]
2025-06-01 22:05:07.626967 | orchestrator -> localhost | changed
2025-06-01 22:05:07.644464 |
2025-06-01 22:05:07.644599 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-06-01 22:05:07.921140 | orchestrator | ok
2025-06-01 22:05:07.928049 |
2025-06-01 22:05:07.928167 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-06-01 22:05:35.579664 | orchestrator | changed:
2025-06-01 22:05:35.579991 | orchestrator | .d..t...... src/
2025-06-01 22:05:35.580043 | orchestrator | .d..t...... src/github.com/
2025-06-01 22:05:35.580080 | orchestrator | .d..t...... src/github.com/osism/
2025-06-01 22:05:35.580123 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-06-01 22:05:35.580158 | orchestrator | RedHat.yml
2025-06-01 22:05:35.593123 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-06-01 22:05:35.593141 | orchestrator | RedHat.yml
2025-06-01 22:05:35.593192 | orchestrator | = 1.53.0"...
2025-06-01 22:05:49.333674 | orchestrator | 22:05:49.333 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-06-01 22:05:50.412679 | orchestrator | 22:05:50.412 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-06-01 22:05:51.254521 | orchestrator | 22:05:51.254 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-06-01 22:05:52.438347 | orchestrator | 22:05:52.438 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-06-01 22:05:53.322281 | orchestrator | 22:05:53.322 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-06-01 22:05:54.647051 | orchestrator | 22:05:54.646 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.1.0...
2025-06-01 22:05:55.944245 | orchestrator | 22:05:55.943 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.1.0 (signed, key ID 4F80527A391BEFD2)
2025-06-01 22:05:55.944314 | orchestrator | 22:05:55.944 STDOUT terraform: Providers are signed by their developers.
2025-06-01 22:05:55.944322 | orchestrator | 22:05:55.944 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-06-01 22:05:55.944328 | orchestrator | 22:05:55.944 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-06-01 22:05:55.944333 | orchestrator | 22:05:55.944 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-06-01 22:05:55.944373 | orchestrator | 22:05:55.944 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-06-01 22:05:55.944432 | orchestrator | 22:05:55.944 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-06-01 22:05:55.944441 | orchestrator | 22:05:55.944 STDOUT terraform: you run "tofu init" in the future.
2025-06-01 22:05:55.944980 | orchestrator | 22:05:55.944 STDOUT terraform: OpenTofu has been successfully initialized!
2025-06-01 22:05:55.945043 | orchestrator | 22:05:55.944 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-06-01 22:05:55.945069 | orchestrator | 22:05:55.945 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-06-01 22:05:55.945077 | orchestrator | 22:05:55.945 STDOUT terraform: should now work.
2025-06-01 22:05:55.945163 | orchestrator | 22:05:55.945 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-06-01 22:05:55.945225 | orchestrator | 22:05:55.945 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-06-01 22:05:55.945271 | orchestrator | 22:05:55.945 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-06-01 22:05:56.116887 | orchestrator | 22:05:56.116 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead.
2025-06-01 22:05:56.307039 | orchestrator | 22:05:56.306 STDOUT terraform: Created and switched to workspace "ci"!
2025-06-01 22:05:56.307132 | orchestrator | 22:05:56.306 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-06-01 22:05:56.307142 | orchestrator | 22:05:56.306 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-06-01 22:05:56.307147 | orchestrator | 22:05:56.306 STDOUT terraform: for this configuration.
2025-06-01 22:05:56.520844 | orchestrator | 22:05:56.520 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead.
2025-06-01 22:05:56.634861 | orchestrator | 22:05:56.634 STDOUT terraform: ci.auto.tfvars
2025-06-01 22:05:56.639193 | orchestrator | 22:05:56.639 STDOUT terraform: default_custom.tf
2025-06-01 22:05:56.873846 | orchestrator | 22:05:56.873 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead.
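For orientation: the providers resolved by "tofu init" above correspond to a required_providers block of roughly this shape. This is a minimal sketch, not the testbed repository's actual configuration; only the ">= 2.2.0" constraint for hashicorp/local is visible in this log, the other entries are inferred from the versions that were installed.

terraform {
  required_providers {
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0"  # constraint shown in the init output above
    }
    null = {
      # v3.2.4 was selected; the constraint itself is not visible in the log
      source = "hashicorp/null"
    }
    openstack = {
      # v3.1.0 was selected during init
      source = "terraform-provider-openstack/openstack"
    }
  }
}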
2025-06-01 22:05:57.881386 | orchestrator | 22:05:57.880 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-06-01 22:05:58.431861 | orchestrator | 22:05:58.431 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-06-01 22:05:58.660284 | orchestrator | 22:05:58.659 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-06-01 22:05:58.660374 | orchestrator | 22:05:58.660 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-06-01 22:05:58.660453 | orchestrator | 22:05:58.660 STDOUT terraform:  + create
2025-06-01 22:05:58.660506 | orchestrator | 22:05:58.660 STDOUT terraform:  <= read (data resources)
2025-06-01 22:05:58.660562 | orchestrator | 22:05:58.660 STDOUT terraform: OpenTofu will perform the following actions:
2025-06-01 22:05:58.661186 | orchestrator | 22:05:58.660 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply
2025-06-01 22:05:58.661251 | orchestrator | 22:05:58.660 STDOUT terraform:  # (config refers to values not yet known)
2025-06-01 22:05:58.661258 | orchestrator | 22:05:58.660 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-06-01 22:05:58.661263 | orchestrator | 22:05:58.660 STDOUT terraform:  + checksum = (known after apply)
2025-06-01 22:05:58.661267 | orchestrator | 22:05:58.660 STDOUT terraform:  + created_at = (known after apply)
2025-06-01 22:05:58.661272 | orchestrator | 22:05:58.660 STDOUT terraform:  + file = (known after apply)
2025-06-01 22:05:58.661276 | orchestrator | 22:05:58.660 STDOUT terraform:  + id = (known after apply)
2025-06-01 22:05:58.661280 | orchestrator | 22:05:58.660 STDOUT terraform:  + metadata = (known after apply)
2025-06-01 22:05:58.661284 | orchestrator | 22:05:58.660 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-06-01 22:05:58.661288 | orchestrator | 22:05:58.661 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-06-01 22:05:58.661292 | orchestrator | 22:05:58.661 STDOUT terraform:  + most_recent = true
2025-06-01 22:05:58.661313 | orchestrator | 22:05:58.661 STDOUT terraform:  + name = (known after apply)
2025-06-01 22:05:58.661317 | orchestrator | 22:05:58.661 STDOUT terraform:  + protected = (known after apply)
2025-06-01 22:05:58.661321 | orchestrator | 22:05:58.661 STDOUT terraform:  + region = (known after apply)
2025-06-01 22:05:58.661331 | orchestrator | 22:05:58.661 STDOUT terraform:  + schema = (known after apply)
2025-06-01 22:05:58.661335 | orchestrator | 22:05:58.661 STDOUT terraform:  + size_bytes = (known after apply)
2025-06-01 22:05:58.661339 | orchestrator | 22:05:58.661 STDOUT terraform:  + tags = (known after apply)
2025-06-01 22:05:58.661343 | orchestrator | 22:05:58.661 STDOUT terraform:  + updated_at = (known after apply)
2025-06-01 22:05:58.661348 | orchestrator | 22:05:58.661 STDOUT terraform:  }
2025-06-01 22:05:58.661419 | orchestrator | 22:05:58.661 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply
2025-06-01 22:05:58.661453 | orchestrator | 22:05:58.661 STDOUT terraform:  # (config refers to values not yet known)
2025-06-01 22:05:58.661492 | orchestrator | 22:05:58.661 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-06-01 22:05:58.661532 | orchestrator | 22:05:58.661 STDOUT terraform:  + checksum = (known after apply)
2025-06-01 22:05:58.661545 | orchestrator | 22:05:58.661 STDOUT terraform:  + created_at = (known after apply)
2025-06-01 22:05:58.661578 | orchestrator | 22:05:58.661 STDOUT terraform:  + file = (known after apply)
2025-06-01 22:05:58.661619 | orchestrator | 22:05:58.661 STDOUT terraform:  + id = (known after apply)
2025-06-01 22:05:58.661638 | orchestrator | 22:05:58.661 STDOUT terraform:  + metadata = (known after apply)
2025-06-01 22:05:58.661669 | orchestrator | 22:05:58.661 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-06-01 22:05:58.661703 | orchestrator | 22:05:58.661 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-06-01 22:05:58.661710 | orchestrator | 22:05:58.661 STDOUT terraform:  + most_recent = true
2025-06-01 22:05:58.661745 | orchestrator | 22:05:58.661 STDOUT terraform:  + name = (known after apply)
2025-06-01 22:05:58.661789 | orchestrator | 22:05:58.661 STDOUT terraform:  + protected = (known after apply)
2025-06-01 22:05:58.661795 | orchestrator | 22:05:58.661 STDOUT terraform:  + region = (known after apply)
2025-06-01 22:05:58.661827 | orchestrator | 22:05:58.661 STDOUT terraform:  + schema = (known after apply)
2025-06-01 22:05:58.661879 | orchestrator | 22:05:58.661 STDOUT terraform:  + size_bytes = (known after apply)
2025-06-01 22:05:58.661899 | orchestrator | 22:05:58.661 STDOUT terraform:  + tags = (known after apply)
2025-06-01 22:05:58.661925 | orchestrator | 22:05:58.661 STDOUT terraform:  + updated_at = (known after apply)
2025-06-01 22:05:58.661959 | orchestrator | 22:05:58.661 STDOUT terraform:  }
2025-06-01 22:05:58.662151 | orchestrator | 22:05:58.662 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created
2025-06-01 22:05:58.662169 | orchestrator | 22:05:58.662 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" {
2025-06-01 22:05:58.662221 | orchestrator | 22:05:58.662 STDOUT terraform:  + content = (known after apply)
2025-06-01 22:05:58.662250 | orchestrator | 22:05:58.662 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-06-01 22:05:58.662286 | orchestrator | 22:05:58.662 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-06-01 22:05:58.662323 | orchestrator | 22:05:58.662 STDOUT terraform:  + content_md5 = (known after apply)
2025-06-01 22:05:58.662359 | orchestrator | 22:05:58.662 STDOUT terraform:  + content_sha1 = (known after apply)
2025-06-01 22:05:58.662398 | orchestrator | 22:05:58.662 STDOUT terraform:  + content_sha256 = (known after apply)
2025-06-01 22:05:58.662434 | orchestrator | 22:05:58.662 STDOUT terraform:  + content_sha512 = (known after apply)
2025-06-01 22:05:58.662472 | orchestrator | 22:05:58.662 STDOUT terraform:  + directory_permission = "0777"
2025-06-01 22:05:58.662491 | orchestrator | 22:05:58.662 STDOUT terraform:  + file_permission = "0644"
2025-06-01 22:05:58.662526 | orchestrator | 22:05:58.662 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci"
2025-06-01 22:05:58.662564 | orchestrator | 22:05:58.662 STDOUT terraform:  + id = (known after apply)
2025-06-01 22:05:58.662571 | orchestrator | 22:05:58.662 STDOUT terraform:  }
2025-06-01 22:05:58.662698 | orchestrator | 22:05:58.662 STDOUT terraform:  # local_file.id_rsa_pub will be created
2025-06-01 22:05:58.662738 | orchestrator | 22:05:58.662 STDOUT terraform:  + resource "local_file" "id_rsa_pub" {
2025-06-01 22:05:58.662763 | orchestrator | 22:05:58.662 STDOUT terraform:  + content = (known after apply)
2025-06-01 22:05:58.662809 | orchestrator | 22:05:58.662 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-06-01 22:05:58.662840 | orchestrator | 22:05:58.662 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-06-01 22:05:58.662877 | orchestrator | 22:05:58.662 STDOUT terraform:  + content_md5 = (known after apply)
2025-06-01 22:05:58.662914 | orchestrator | 22:05:58.662 STDOUT terraform:  + content_sha1 = (known after apply)
2025-06-01 22:05:58.662952 | orchestrator | 22:05:58.662 STDOUT terraform:  + content_sha256 = (known after apply)
2025-06-01 22:05:58.662993 | orchestrator | 22:05:58.662 STDOUT terraform:  + content_sha512 = (known after apply)
2025-06-01 22:05:58.663019 | orchestrator | 22:05:58.662 STDOUT terraform:  + directory_permission = "0777"
2025-06-01 22:05:58.663059 | orchestrator | 22:05:58.663 STDOUT terraform:  + file_permission = "0644"
2025-06-01 22:05:58.663104 | orchestrator | 22:05:58.663 STDOUT terraform:  + filename = ".id_rsa.ci.pub"
2025-06-01 22:05:58.663143 | orchestrator | 22:05:58.663 STDOUT terraform:  + id = (known after apply)
2025-06-01 22:05:58.663160 | orchestrator | 22:05:58.663 STDOUT terraform:  }
2025-06-01 22:05:58.663284 | orchestrator | 22:05:58.663 STDOUT terraform:  # local_file.inventory will be created
2025-06-01 22:05:58.663311 | orchestrator | 22:05:58.663 STDOUT terraform:  + resource "local_file" "inventory" {
2025-06-01 22:05:58.663362 | orchestrator | 22:05:58.663 STDOUT terraform:  + content = (known after apply)
2025-06-01 22:05:58.663405 | orchestrator | 22:05:58.663 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-06-01 22:05:58.663434 | orchestrator | 22:05:58.663 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-06-01 22:05:58.663471 | orchestrator | 22:05:58.663 STDOUT terraform:  + content_md5 = (known after apply)
2025-06-01 22:05:58.663514 | orchestrator | 22:05:58.663 STDOUT terraform:  + content_sha1 = (known after apply)
2025-06-01 22:05:58.663595 | orchestrator | 22:05:58.663 STDOUT terraform:  + content_sha256 = (known after apply)
2025-06-01 22:05:58.663633 | orchestrator | 22:05:58.663 STDOUT terraform:  + content_sha512 = (known after apply)
2025-06-01 22:05:58.663663 | orchestrator | 22:05:58.663 STDOUT terraform:  + directory_permission = "0777"
2025-06-01 22:05:58.663688 | orchestrator | 22:05:58.663 STDOUT terraform:  + file_permission = "0644"
2025-06-01 22:05:58.663733 | orchestrator | 22:05:58.663 STDOUT terraform:  + filename = "inventory.ci"
2025-06-01 22:05:58.663759 | orchestrator | 22:05:58.663 STDOUT terraform:  + id = (known after apply)
2025-06-01 22:05:58.663775 | orchestrator | 22:05:58.663 STDOUT terraform:  }
2025-06-01 22:05:58.663899 | orchestrator | 22:05:58.663 STDOUT terraform:  # local_sensitive_file.id_rsa will be created
2025-06-01 22:05:58.663932 | orchestrator | 22:05:58.663 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" {
2025-06-01 22:05:58.663968 | orchestrator | 22:05:58.663 STDOUT terraform:  + content = (sensitive value)
2025-06-01 22:05:58.664010 | orchestrator | 22:05:58.663 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-06-01 22:05:58.664051 | orchestrator | 22:05:58.664 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-06-01 22:05:58.664116 | orchestrator | 22:05:58.664 STDOUT terraform:  + content_md5 = (known after apply)
2025-06-01 22:05:58.664354 | orchestrator | 22:05:58.664 STDOUT terraform:  + content_sha1 = (known after apply)
2025-06-01 22:05:58.664430 | orchestrator | 22:05:58.664 STDOUT terraform:  + content_sha256 = (known after apply)
2025-06-01 22:05:58.664448 | orchestrator | 22:05:58.664 STDOUT terraform:  + content_sha512 = (known after apply)
2025-06-01 22:05:58.664461 | orchestrator | 22:05:58.664 STDOUT terraform:  + directory_permission = "0700"
2025-06-01 22:05:58.664472 | orchestrator | 22:05:58.664 STDOUT terraform:  + file_permission = "0600"
2025-06-01 22:05:58.664494 | orchestrator | 22:05:58.664 STDOUT terraform:  + filename = ".id_rsa.ci"
2025-06-01 22:05:58.664506 | orchestrator | 22:05:58.664 STDOUT terraform:  + id = (known after apply)
2025-06-01 22:05:58.664517 | orchestrator | 22:05:58.664 STDOUT terraform:  }
2025-06-01 22:05:58.664529 | orchestrator | 22:05:58.664 STDOUT terraform:  # null_resource.node_semaphore will be created
2025-06-01 22:05:58.664540 | orchestrator | 22:05:58.664 STDOUT terraform:  + resource "null_resource" "node_semaphore" {
2025-06-01 22:05:58.664551 | orchestrator | 22:05:58.664 STDOUT terraform:  + id = (known after apply)
2025-06-01 22:05:58.664562 | orchestrator | 22:05:58.664 STDOUT terraform:  }
2025-06-01 22:05:58.664615 | orchestrator | 22:05:58.664 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-06-01 22:05:58.664698 | orchestrator | 22:05:58.664 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-06-01 22:05:58.664713 | orchestrator | 22:05:58.664 STDOUT terraform:  + attachment = (known after apply)
2025-06-01 22:05:58.664770 | orchestrator | 22:05:58.664 STDOUT terraform:  + availability_zone = "nova"
2025-06-01 22:05:58.664783 | orchestrator | 22:05:58.664 STDOUT terraform:  + id = (known after apply)
2025-06-01 22:05:58.664798 | orchestrator | 22:05:58.664 STDOUT terraform:  + image_id = (known after apply)
2025-06-01 22:05:58.664846 | orchestrator | 22:05:58.664 STDOUT terraform:  + metadata = (known after apply)
2025-06-01 22:05:58.664863 | orchestrator | 22:05:58.664 STDOUT terraform:  + name = "testbed-volume-manager-base"
2025-06-01 22:05:58.664998 | orchestrator | 22:05:58.664 STDOUT terraform:  + region = (known after apply)
2025-06-01 22:05:58.665017 | orchestrator | 22:05:58.664 STDOUT terraform:  + size = 80
2025-06-01 22:05:58.665023 | orchestrator | 22:05:58.664 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-01 22:05:58.665029 | orchestrator | 22:05:58.664 STDOUT terraform:  + volume_type = "ssd"
2025-06-01 22:05:58.665034 | orchestrator | 22:05:58.664 STDOUT terraform:  }
2025-06-01 22:05:58.665141 | orchestrator | 22:05:58.665 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-06-01 22:05:58.665188 | orchestrator | 22:05:58.665 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-01 22:05:58.665232 | orchestrator | 22:05:58.665 STDOUT terraform:  + attachment = (known after apply)
2025-06-01 22:05:58.665257 | orchestrator | 22:05:58.665 STDOUT terraform:  + availability_zone = "nova"
2025-06-01 22:05:58.665296 | orchestrator | 22:05:58.665 STDOUT terraform:  + id = (known after apply)
2025-06-01 22:05:58.665333 | orchestrator | 22:05:58.665 STDOUT terraform:  + image_id = (known after apply)
2025-06-01 22:05:58.665375 | orchestrator | 22:05:58.665 STDOUT terraform:  + metadata = (known after apply)
2025-06-01 22:05:58.665442 | orchestrator | 22:05:58.665 STDOUT terraform:  + name = "testbed-volume-0-node-base"
2025-06-01 22:05:58.665472 | orchestrator | 22:05:58.665 STDOUT terraform:  + region = (known after apply)
2025-06-01 22:05:58.665491 | orchestrator | 22:05:58.665 STDOUT terraform:  + size = 80
2025-06-01 22:05:58.665515 | orchestrator | 22:05:58.665 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-01 22:05:58.665540 | orchestrator | 22:05:58.665 STDOUT terraform:  + volume_type = "ssd"
2025-06-01 22:05:58.665547 | orchestrator | 22:05:58.665 STDOUT terraform:  }
2025-06-01 22:05:58.665685 | orchestrator | 22:05:58.665 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-06-01 22:05:58.665743 | orchestrator | 22:05:58.665 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-01 22:05:58.665792 | orchestrator | 22:05:58.665 STDOUT terraform:  + attachment = (known after apply)
2025-06-01 22:05:58.665799 | orchestrator | 22:05:58.665 STDOUT terraform:  + availability_zone = "nova"
2025-06-01 22:05:58.665843 | orchestrator | 22:05:58.665 STDOUT terraform:  + id = (known after apply)
2025-06-01 22:05:58.665880 | orchestrator | 22:05:58.665 STDOUT terraform:  + image_id = (known after apply)
2025-06-01 22:05:58.665914 | orchestrator | 22:05:58.665 STDOUT terraform:  + metadata = (known after apply)
2025-06-01 22:05:58.665959 | orchestrator | 22:05:58.665 STDOUT terraform:  + name = "testbed-volume-1-node-base"
2025-06-01 22:05:58.666004 | orchestrator | 22:05:58.665 STDOUT terraform:  + region = (known after apply)
2025-06-01 22:05:58.666045 | orchestrator | 22:05:58.666 STDOUT terraform:  + size = 80
2025-06-01 22:05:58.666069 | orchestrator | 22:05:58.666 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-01 22:05:58.666112 | orchestrator | 22:05:58.666 STDOUT terraform:  + volume_type = "ssd"
2025-06-01 22:05:58.666117 | orchestrator | 22:05:58.666 STDOUT terraform:  }
2025-06-01 22:05:58.666258 | orchestrator | 22:05:58.666 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-06-01 22:05:58.666307 | orchestrator | 22:05:58.666 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-01 22:05:58.666343 | orchestrator | 22:05:58.666 STDOUT terraform:  + attachment = (known after apply)
2025-06-01 22:05:58.666372 | orchestrator | 22:05:58.666 STDOUT terraform:  + availability_zone = "nova"
2025-06-01 22:05:58.666406 | orchestrator | 22:05:58.666 STDOUT terraform:  + id = (known after apply)
2025-06-01 22:05:58.666450 | orchestrator | 22:05:58.666 STDOUT terraform:  + image_id = (known after apply)
2025-06-01 22:05:58.666487 | orchestrator | 22:05:58.666 STDOUT terraform:  + metadata = (known after apply)
2025-06-01 22:05:58.666531 | orchestrator | 22:05:58.666 STDOUT terraform:  + name = "testbed-volume-2-node-base"
2025-06-01 22:05:58.666568 | orchestrator | 22:05:58.666 STDOUT terraform:  + region = (known after apply)
2025-06-01 22:05:58.666606 | orchestrator | 22:05:58.666 STDOUT terraform:  + size = 80
2025-06-01 22:05:58.666630 | orchestrator | 22:05:58.666 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-01 22:05:58.666655 | orchestrator | 22:05:58.666 STDOUT terraform:  + volume_type = "ssd"
2025-06-01 22:05:58.666670 | orchestrator | 22:05:58.666 STDOUT terraform:  }
2025-06-01 22:05:58.666799 | orchestrator | 22:05:58.666 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-06-01 22:05:58.666847 | orchestrator | 22:05:58.666 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-01 22:05:58.666882 | orchestrator | 22:05:58.666 STDOUT terraform:  + attachment = (known after apply)
2025-06-01 22:05:58.666899 | orchestrator | 22:05:58.666 STDOUT terraform:  + availability_zone = "nova"
2025-06-01 22:05:58.666939 | orchestrator | 22:05:58.666 STDOUT terraform:  + id = (known after apply)
2025-06-01 22:05:58.666978 | orchestrator | 22:05:58.666 STDOUT terraform:  + image_id = (known after apply)
2025-06-01 22:05:58.667016 | orchestrator | 22:05:58.666 STDOUT terraform:  + metadata = (known after apply)
2025-06-01 22:05:58.667060 | orchestrator | 22:05:58.667 STDOUT terraform:  + name = "testbed-volume-3-node-base"
2025-06-01 22:05:58.667110 | orchestrator | 22:05:58.667 STDOUT terraform:  + region = (known after apply)
2025-06-01 22:05:58.667123 | orchestrator | 22:05:58.667 STDOUT terraform:  + size = 80
2025-06-01 22:05:58.667163 | orchestrator | 22:05:58.667 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-01 22:05:58.667170 | orchestrator | 22:05:58.667 STDOUT terraform:  + volume_type = "ssd"
2025-06-01 22:05:58.667187 | orchestrator | 22:05:58.667 STDOUT terraform:  }
2025-06-01 22:05:58.667348 | orchestrator | 22:05:58.667 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-06-01 22:05:58.667416 | orchestrator | 22:05:58.667 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-01 22:05:58.667463 | orchestrator | 22:05:58.667 STDOUT terraform:  + attachment = (known after apply)
2025-06-01 22:05:58.667480 | orchestrator | 22:05:58.667 STDOUT terraform:  + availability_zone = "nova"
2025-06-01 22:05:58.667517 | orchestrator | 22:05:58.667 STDOUT terraform:  + id = (known after apply)
2025-06-01 22:05:58.667553 | orchestrator | 22:05:58.667 STDOUT terraform:  + image_id = (known after apply)
2025-06-01 22:05:58.667592 | orchestrator | 22:05:58.667 STDOUT terraform:  + metadata = (known after apply)
2025-06-01 22:05:58.667641 | orchestrator | 22:05:58.667 STDOUT terraform:  + name = "testbed-volume-4-node-base"
2025-06-01 22:05:58.667675 | orchestrator | 22:05:58.667 STDOUT terraform:  + region = (known after apply)
2025-06-01 22:05:58.667704 | orchestrator | 22:05:58.667 STDOUT terraform:  + size = 80
2025-06-01 22:05:58.667739 | orchestrator | 22:05:58.667 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-01 22:05:58.667771 | orchestrator | 22:05:58.667 STDOUT terraform:  + volume_type = "ssd"
2025-06-01 22:05:58.667776 | orchestrator | 22:05:58.667 STDOUT terraform:  }
2025-06-01 22:05:58.667903 | orchestrator | 22:05:58.667 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-06-01 22:05:58.667949 | orchestrator | 22:05:58.667 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-01 22:05:58.667999 | orchestrator | 22:05:58.667 STDOUT terraform:  + attachment = (known after apply)
2025-06-01 22:05:58.668021 | orchestrator | 22:05:58.667 STDOUT terraform:  + availability_zone = "nova"
2025-06-01 22:05:58.668053 | orchestrator | 22:05:58.668 STDOUT terraform:  + id = (known after apply)
2025-06-01 22:05:58.668108 | orchestrator | 22:05:58.668 STDOUT terraform:  + image_id = (known after apply)
2025-06-01 22:05:58.668131 | orchestrator | 22:05:58.668 STDOUT terraform:  + metadata = (known after apply)
2025-06-01 22:05:58.668191 | orchestrator | 22:05:58.668 STDOUT terraform:  + name = "testbed-volume-5-node-base"
2025-06-01 22:05:58.668219 | orchestrator | 22:05:58.668 STDOUT terraform:  + region = (known after apply)
2025-06-01 22:05:58.668240 | orchestrator | 22:05:58.668 STDOUT terraform:  + size = 80
2025-06-01 22:05:58.668271 | orchestrator | 22:05:58.668 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-01 22:05:58.668290 | orchestrator | 22:05:58.668 STDOUT terraform:  + volume_type = "ssd"
2025-06-01 22:05:58.668307 | orchestrator | 22:05:58.668 STDOUT terraform:  }
2025-06-01 22:05:58.669012 | orchestrator | 22:05:58.668 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-06-01 22:05:58.669118 | orchestrator | 22:05:58.669 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-06-01 22:05:58.669193 | orchestrator | 22:05:58.669 STDOUT terraform:  + attachment = (known after apply)
2025-06-01 22:05:58.669250 | orchestrator | 22:05:58.669 STDOUT terraform:  + availability_zone = "nova"
2025-06-01 22:05:58.669315 | orchestrator | 22:05:58.669 STDOUT terraform:  + id = (known after apply)
2025-06-01 22:05:58.669383 | orchestrator | 22:05:58.669 STDOUT terraform:  + metadata = (known after apply)
2025-06-01 22:05:58.669434 | orchestrator | 22:05:58.669 STDOUT terraform:  + name = "testbed-volume-0-node-3"
2025-06-01 22:05:58.669506 | orchestrator | 22:05:58.669 STDOUT terraform:  + region = (known after apply)
2025-06-01 22:05:58.669556 | orchestrator | 22:05:58.669 STDOUT terraform:  + size = 20
2025-06-01 22:05:58.669589 | orchestrator | 22:05:58.669 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-01 22:05:58.669638 | orchestrator | 22:05:58.669 STDOUT terraform:  + volume_type = "ssd"
2025-06-01 22:05:58.669662 | orchestrator | 22:05:58.669 STDOUT terraform:  }
2025-06-01 22:05:58.669734 | orchestrator | 22:05:58.669 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created
2025-06-01 22:05:58.669819 | orchestrator | 22:05:58.669 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-06-01 22:05:58.669881 | orchestrator | 22:05:58.669 STDOUT terraform:  + attachment = (known after apply)
2025-06-01 22:05:58.669922 | orchestrator | 22:05:58.669 STDOUT terraform:  + availability_zone = "nova"
2025-06-01 22:05:58.669982 | orchestrator | 22:05:58.669 STDOUT terraform:  + id = (known after apply)
2025-06-01 22:05:58.670133 | orchestrator | 22:05:58.669 STDOUT terraform:  + metadata = (known after apply)
2025-06-01 22:05:58.670202 | orchestrator | 22:05:58.670 STDOUT terraform:  + name = "testbed-volume-1-node-4"
2025-06-01 22:05:58.670255 | orchestrator | 22:05:58.670 STDOUT terraform:  + region = (known after apply)
2025-06-01 22:05:58.670303 | orchestrator | 22:05:58.670 STDOUT terraform:  + size = 20
2025-06-01 22:05:58.670358 | orchestrator | 22:05:58.670 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-01 22:05:58.670393 | orchestrator | 22:05:58.670 STDOUT terraform:  + volume_type = "ssd"
2025-06-01 22:05:58.670433 | orchestrator | 22:05:58.670 STDOUT terraform:  }
2025-06-01 22:05:58.670503 | orchestrator | 22:05:58.670 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created
2025-06-01 22:05:58.670594 | orchestrator | 22:05:58.670 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-06-01 22:05:58.670656 | orchestrator | 22:05:58.670 STDOUT terraform:  + attachment = (known after apply)
2025-06-01 22:05:58.670727 | orchestrator | 22:05:58.670 STDOUT terraform:  + availability_zone = "nova"
2025-06-01 22:05:58.670794 | orchestrator | 22:05:58.670 STDOUT terraform:  + id = (known after apply)
2025-06-01 22:05:58.670878 | orchestrator | 22:05:58.670 STDOUT terraform:  + metadata = (known after apply)
2025-06-01 22:05:58.670963 | orchestrator | 22:05:58.670 STDOUT terraform:  + name = "testbed-volume-2-node-5"
2025-06-01 22:05:58.671029 | orchestrator | 22:05:58.670 STDOUT terraform:  + region = (known after apply)
2025-06-01 22:05:58.671059 | orchestrator | 22:05:58.671 STDOUT terraform:  + size = 20
2025-06-01 22:05:58.671123 | orchestrator | 22:05:58.671 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-01 22:05:58.671157 | orchestrator | 22:05:58.671 STDOUT terraform:  + volume_type = "ssd"
2025-06-01 22:05:58.671206 | orchestrator | 22:05:58.671 STDOUT terraform:  }
2025-06-01 22:05:58.671283 | orchestrator | 22:05:58.671 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created
2025-06-01 22:05:58.671361 | orchestrator | 22:05:58.671 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-06-01 22:05:58.671419 | orchestrator | 22:05:58.671 STDOUT terraform:  + attachment = (known after apply)
2025-06-01 22:05:58.671454 | orchestrator | 22:05:58.671 STDOUT terraform:  + availability_zone = "nova"
2025-06-01 22:05:58.671523 | orchestrator | 22:05:58.671 STDOUT terraform:  + id = (known after apply)
2025-06-01 22:05:58.671601 | orchestrator | 22:05:58.671 STDOUT terraform:  + metadata = (known after apply)
2025-06-01 22:05:58.671664 | orchestrator | 22:05:58.671 STDOUT terraform:  + name = "testbed-volume-3-node-3"
2025-06-01 22:05:58.671730 | orchestrator | 22:05:58.671 STDOUT terraform:  + region = (known after apply)
2025-06-01 22:05:58.671762 | orchestrator | 22:05:58.671 STDOUT terraform:  + size = 20
2025-06-01 22:05:58.671813 | orchestrator | 22:05:58.671 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-01 22:05:58.671870 | orchestrator | 22:05:58.671 STDOUT terraform:  + volume_type = "ssd"
2025-06-01 22:05:58.671898 | orchestrator | 22:05:58.671 STDOUT terraform:  }
2025-06-01 22:05:58.671975 | orchestrator | 22:05:58.671 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created
2025-06-01 22:05:58.672052 | orchestrator | 22:05:58.671 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-06-01 22:05:58.672173 | orchestrator | 22:05:58.672 STDOUT terraform:  + attachment = (known after apply)
2025-06-01 22:05:58.672230 | orchestrator | 22:05:58.672 STDOUT terraform:  + availability_zone = "nova"
2025-06-01 22:05:58.672291 | orchestrator | 22:05:58.672 STDOUT terraform:  + id = (known after apply)
2025-06-01 22:05:58.672339 | orchestrator | 22:05:58.672 STDOUT terraform:  + metadata = (known after apply)
2025-06-01 22:05:58.672412 | orchestrator | 22:05:58.672 STDOUT terraform:  + name = "testbed-volume-4-node-4"
2025-06-01 22:05:58.672476 | orchestrator | 22:05:58.672 STDOUT terraform:  + region = (known after apply)
2025-06-01 22:05:58.672524 | orchestrator | 22:05:58.672 STDOUT terraform:  + size = 20
2025-06-01 22:05:58.672564 | orchestrator | 22:05:58.672 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-01 22:05:58.672615 | orchestrator | 22:05:58.672 STDOUT terraform:  + volume_type = "ssd"
2025-06-01 22:05:58.672648 | orchestrator | 22:05:58.672 STDOUT terraform:  }
2025-06-01 22:05:58.672720 | orchestrator | 22:05:58.672 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created
2025-06-01 22:05:58.672800 | orchestrator | 22:05:58.672 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-06-01 22:05:58.672873 | orchestrator | 22:05:58.672 STDOUT terraform:  + attachment = (known after apply)
2025-06-01 22:05:58.672921 | orchestrator | 22:05:58.672 STDOUT terraform:  + availability_zone = "nova"
2025-06-01 22:05:58.672973 | orchestrator | 22:05:58.672 STDOUT terraform:  + id = (known after apply)
2025-06-01 22:05:58.673017 | orchestrator | 22:05:58.672 STDOUT terraform:  + metadata = (known after apply)
2025-06-01 22:05:58.673138 | orchestrator | 22:05:58.673 STDOUT terraform:  + name = "testbed-volume-5-node-5"
2025-06-01 22:05:58.673208 | orchestrator | 22:05:58.673 STDOUT terraform:  + region = (known after apply)
2025-06-01 22:05:58.673244 | orchestrator | 22:05:58.673 STDOUT terraform:  + size = 20
2025-06-01 22:05:58.673316 | orchestrator | 22:05:58.673 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-01 22:05:58.673354 | orchestrator | 22:05:58.673 STDOUT terraform:  + volume_type = "ssd"
2025-06-01 22:05:58.673377 | orchestrator | 22:05:58.673 STDOUT terraform:  }
2025-06-01 22:05:58.673434 | orchestrator | 22:05:58.673 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created
2025-06-01 22:05:58.673487 | orchestrator | 22:05:58.673 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-06-01 22:05:58.673532 | orchestrator | 22:05:58.673 STDOUT terraform:  + attachment = (known after apply)
2025-06-01 22:05:58.673564 | orchestrator | 22:05:58.673 STDOUT terraform:  + availability_zone = "nova"
2025-06-01 22:05:58.673617 | orchestrator | 22:05:58.673 STDOUT terraform:  + id = (known after apply)
2025-06-01 22:05:58.673677 | orchestrator | 22:05:58.673 STDOUT terraform:  + metadata = (known after apply)
2025-06-01 22:05:58.673724 | orchestrator | 22:05:58.673 STDOUT terraform:  + name = "testbed-volume-6-node-3"
2025-06-01 22:05:58.673767 | orchestrator | 22:05:58.673 STDOUT terraform:  + region = (known after apply)
2025-06-01 22:05:58.673816 | orchestrator | 22:05:58.673 STDOUT terraform:  + size = 20
2025-06-01 22:05:58.673869 | orchestrator | 22:05:58.673 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-01 22:05:58.673919 | orchestrator | 22:05:58.673 STDOUT terraform:  + volume_type = "ssd"
2025-06-01 22:05:58.673960 | orchestrator | 22:05:58.673 STDOUT terraform:  }
2025-06-01 22:05:58.674056 | orchestrator | 22:05:58.673 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created
2025-06-01 22:05:58.674180 | orchestrator | 22:05:58.674 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-06-01 22:05:58.674254 | orchestrator | 22:05:58.674 STDOUT terraform:  + attachment = (known after apply)
2025-06-01 22:05:58.674309 | orchestrator | 22:05:58.674 STDOUT terraform:  + availability_zone = "nova"
2025-06-01 22:05:58.674398 | orchestrator | 22:05:58.674 STDOUT terraform:  + id = (known after apply)
2025-06-01 22:05:58.674486 | orchestrator | 22:05:58.674 STDOUT terraform:  + metadata = (known after apply)
2025-06-01 22:05:58.674557 | orchestrator | 22:05:58.674 STDOUT terraform:  + name = "testbed-volume-7-node-4"
2025-06-01 22:05:58.674622 | orchestrator | 22:05:58.674 STDOUT terraform:  + region = (known after apply)
2025-06-01 22:05:58.674678 | orchestrator | 22:05:58.674 STDOUT terraform:  + size = 20
2025-06-01 22:05:58.674730 | orchestrator | 22:05:58.674 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-01 22:05:58.674777 | orchestrator | 22:05:58.674 STDOUT terraform:  + volume_type = "ssd"
2025-06-01 22:05:58.674810 | orchestrator | 22:05:58.674 STDOUT terraform:  }
2025-06-01 22:05:58.674900 | orchestrator | 22:05:58.674 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created
2025-06-01 22:05:58.675000 | orchestrator | 22:05:58.674 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-06-01 22:05:58.675080 | orchestrator | 22:05:58.675 STDOUT terraform:  + attachment = (known after apply)
2025-06-01 22:05:58.675152 | orchestrator | 22:05:58.675 STDOUT terraform:  + availability_zone = "nova"
2025-06-01 22:05:58.675235 | orchestrator | 22:05:58.675 STDOUT terraform:  + id = (known after apply)
2025-06-01 22:05:58.675290 | orchestrator | 22:05:58.675 STDOUT terraform:  + metadata = (known after apply)
2025-06-01 22:05:58.675341 | orchestrator | 22:05:58.675 STDOUT terraform:  + name = "testbed-volume-8-node-5"
2025-06-01 22:05:58.675389 | orchestrator | 22:05:58.675 STDOUT terraform:  + region = (known after apply)
2025-06-01 22:05:58.675421 | orchestrator | 22:05:58.675 STDOUT terraform:  + size = 20
2025-06-01 22:05:58.675457 | orchestrator | 22:05:58.675 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-01 22:05:58.675491 | orchestrator | 22:05:58.675 STDOUT terraform:  + volume_type = "ssd"
2025-06-01 22:05:58.675514 | orchestrator | 22:05:58.675 STDOUT terraform:  }
2025-06-01 22:05:58.675665 | orchestrator | 22:05:58.675 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created
2025-06-01 22:05:58.675745 | orchestrator | 22:05:58.675 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" {
2025-06-01 22:05:58.675792 | orchestrator | 22:05:58.675 STDOUT terraform:  + access_ip_v4 = (known after apply)
2025-06-01 22:05:58.675838 | orchestrator | 22:05:58.675 STDOUT terraform:  + access_ip_v6 = (known after apply)
2025-06-01 22:05:58.675882 | orchestrator | 22:05:58.675 STDOUT terraform:  + all_metadata = (known after apply)
2025-06-01 22:05:58.675930 | orchestrator | 22:05:58.675 STDOUT terraform:  + all_tags = (known after apply)
2025-06-01 22:05:58.675966 | orchestrator | 22:05:58.675 STDOUT terraform:  + availability_zone = "nova"
2025-06-01 22:05:58.675995 | orchestrator | 22:05:58.675 STDOUT terraform:  + config_drive = true
2025-06-01 22:05:58.676040 | orchestrator | 22:05:58.676 STDOUT terraform:  + created = (known after apply)
2025-06-01 22:05:58.676099 | orchestrator | 22:05:58.676 STDOUT terraform:  + flavor_id = (known after apply)
2025-06-01 22:05:58.676149 | orchestrator | 22:05:58.676 STDOUT terraform:  + flavor_name = "OSISM-4V-16"
2025-06-01 22:05:58.676184 | orchestrator | 22:05:58.676 STDOUT terraform:  + force_delete = false
2025-06-01 22:05:58.676229 | orchestrator | 22:05:58.676 STDOUT terraform:  + hypervisor_hostname = (known after apply)
2025-06-01 22:05:58.676274 | orchestrator | 22:05:58.676 STDOUT terraform:  + id = (known after apply)
2025-06-01 22:05:58.676323 | orchestrator | 22:05:58.676 STDOUT terraform:  + image_id = (known after apply)
2025-06-01 22:05:58.676371 | orchestrator | 22:05:58.676 STDOUT terraform:  + image_name = (known after apply)
2025-06-01 22:05:58.676405 | orchestrator | 22:05:58.676 STDOUT terraform:  + key_pair = "testbed"
2025-06-01 22:05:58.676448 | orchestrator | 22:05:58.676 STDOUT terraform:  + name = "testbed-manager"
2025-06-01 22:05:58.676483 | orchestrator | 22:05:58.676 STDOUT terraform:  + power_state = "active"
2025-06-01 22:05:58.676528 | orchestrator | 22:05:58.676 STDOUT terraform:  + region = (known after apply)
2025-06-01 22:05:58.676573 | orchestrator | 22:05:58.676 STDOUT terraform:  + security_groups = (known after apply)
2025-06-01 22:05:58.676605 | orchestrator | 22:05:58.676 STDOUT terraform:  + stop_before_destroy = false
2025-06-01 22:05:58.676650 | orchestrator | 22:05:58.676 STDOUT terraform:  + updated = (known after apply)
2025-06-01 22:05:58.676694 | orchestrator | 22:05:58.676 STDOUT terraform:  + user_data = (known after apply)
2025-06-01 22:05:58.676740 | orchestrator | 22:05:58.676 STDOUT terraform:  + block_device {
2025-06-01 22:05:58.676775 | orchestrator | 22:05:58.676 STDOUT terraform:  + boot_index = 0
2025-06-01 22:05:58.676811 | orchestrator | 22:05:58.676 STDOUT terraform:  + delete_on_termination = false
2025-06-01 22:05:58.676850 | orchestrator | 22:05:58.676 STDOUT terraform:  + destination_type = "volume"
2025-06-01 22:05:58.676886 | orchestrator | 22:05:58.676 STDOUT terraform:  + multiattach = false
2025-06-01 22:05:58.676925 | orchestrator | 22:05:58.676 STDOUT terraform:  + source_type = "volume"
2025-06-01 22:05:58.676972 | orchestrator | 22:05:58.676 STDOUT terraform:  + uuid = (known after apply)
2025-06-01 22:05:58.676995 | orchestrator | 22:05:58.676 STDOUT terraform:  }
2025-06-01 22:05:58.677018 | orchestrator | 22:05:58.677 STDOUT terraform:  + network {
2025-06-01 22:05:58.677048 | orchestrator | 22:05:58.677 STDOUT terraform:  + access_network = false
2025-06-01 22:05:58.677099 | orchestrator | 22:05:58.677 STDOUT terraform:  + fixed_ip_v4 = (known after apply)
2025-06-01 22:05:58.677137 | orchestrator | 22:05:58.677 STDOUT terraform:  + fixed_ip_v6 = (known after apply)
2025-06-01 22:05:58.677178 | orchestrator | 22:05:58.677 STDOUT terraform:  + mac = (known after apply)
2025-06-01 22:05:58.677217 | orchestrator | 22:05:58.677 STDOUT terraform:  + name = (known after apply)
2025-06-01 22:05:58.677257 | orchestrator | 22:05:58.677 STDOUT terraform:  + port = (known after apply)
2025-06-01 22:05:58.677297 | orchestrator | 22:05:58.677 STDOUT terraform:  + uuid = (known after apply)
2025-06-01 22:05:58.677326 | orchestrator | 22:05:58.677 STDOUT terraform:  }
2025-06-01 22:05:58.677348 | orchestrator | 22:05:58.677 STDOUT terraform:  }
2025-06-01 22:05:58.677400 | orchestrator | 22:05:58.677 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created
2025-06-01 22:05:58.677451 | orchestrator | 22:05:58.677 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" {
2025-06-01 22:05:58.677495 | orchestrator | 22:05:58.677 STDOUT terraform:  + access_ip_v4 = (known after apply)
2025-06-01 22:05:58.677539 | orchestrator | 22:05:58.677 STDOUT terraform:  + access_ip_v6 = (known after apply)
2025-06-01 22:05:58.677584 | orchestrator | 22:05:58.677 STDOUT terraform:  + all_metadata = (known after apply)
2025-06-01 22:05:58.677627 | orchestrator | 22:05:58.677 STDOUT terraform:  + all_tags = (known after apply)
2025-06-01 22:05:58.677660 | orchestrator | 22:05:58.677 STDOUT terraform:  + availability_zone = "nova"
2025-06-01 22:05:58.677688 | orchestrator | 22:05:58.677 STDOUT terraform:  + config_drive = true
2025-06-01 22:05:58.677732 | orchestrator | 22:05:58.677 STDOUT terraform:  + created = (known after apply)
2025-06-01 22:05:58.677774 | orchestrator | 22:05:58.677 STDOUT terraform:  + flavor_id = (known after apply)
2025-06-01 22:05:58.677813 | orchestrator | 22:05:58.677 STDOUT terraform:  + flavor_name = "OSISM-8V-32"
2025-06-01 22:05:58.677844 | orchestrator | 22:05:58.677 STDOUT terraform:  + force_delete = false
2025-06-01 22:05:58.677886 | orchestrator | 22:05:58.677 STDOUT terraform:  + hypervisor_hostname = (known after apply)
2025-06-01 22:05:58.677930 | orchestrator | 22:05:58.677 STDOUT terraform:  + id = (known after apply)
2025-06-01 22:05:58.677976 | orchestrator | 22:05:58.677 STDOUT terraform:  + image_id = (known after apply)
2025-06-01 22:05:58.678037 | orchestrator | 22:05:58.677 STDOUT terraform:  + image_name = (known after apply)
2025-06-01 22:05:58.678071 | orchestrator | 22:05:58.678 STDOUT terraform:  + key_pair = "testbed"
2025-06-01 22:05:58.678141 | orchestrator | 22:05:58.678 STDOUT terraform:  + name = "testbed-node-0"
2025-06-01 22:05:58.678175 | orchestrator | 22:05:58.678 STDOUT terraform:  + power_state = "active"
2025-06-01 22:05:58.678225 | orchestrator | 22:05:58.678 STDOUT terraform:  + region = (known after apply)
2025-06-01 22:05:58.678272 | orchestrator | 22:05:58.678 STDOUT terraform:  + security_groups = (known after apply)
2025-06-01 22:05:58.678305 | orchestrator | 22:05:58.678 STDOUT terraform:  + stop_before_destroy = false
2025-06-01 22:05:58.678362 | orchestrator | 22:05:58.678 STDOUT terraform:  + updated = (known after apply)
2025-06-01 22:05:58.678436 | orchestrator | 22:05:58.678 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2025-06-01 22:05:58.678473 | orchestrator | 22:05:58.678 STDOUT terraform:  + block_device {
2025-06-01 22:05:58.678507 | orchestrator | 22:05:58.678 STDOUT terraform:  + boot_index = 0
2025-06-01 22:05:58.678542 | orchestrator | 22:05:58.678 STDOUT terraform:  + delete_on_termination = false
2025-06-01 22:05:58.678578 | orchestrator | 22:05:58.678 STDOUT terraform:  + destination_type = "volume"
2025-06-01 22:05:58.681814 | orchestrator | 22:05:58.678 STDOUT terraform:  + multiattach = false
2025-06-01 22:05:58.681966 | orchestrator | 22:05:58.681 STDOUT terraform:  + source_type = "volume"
2025-06-01 22:05:58.682052 | orchestrator | 22:05:58.681 STDOUT terraform:  + uuid = (known after apply)
2025-06-01 22:05:58.682134 | orchestrator | 22:05:58.682 STDOUT terraform:  }
2025-06-01 22:05:58.682203 | orchestrator | 22:05:58.682 STDOUT terraform:  + network {
2025-06-01 22:05:58.682235 | orchestrator | 22:05:58.682 STDOUT terraform:  + access_network = false
2025-06-01 22:05:58.683267 | orchestrator | 22:05:58.683 STDOUT terraform:  + fixed_ip_v4 = (known after apply)
2025-06-01 22:05:58.683335 | orchestrator | 22:05:58.683 STDOUT terraform:  + fixed_ip_v6 = (known after apply)
2025-06-01 22:05:58.683377 | orchestrator | 22:05:58.683 STDOUT terraform:  + mac = (known after apply)
2025-06-01 22:05:58.683419 | orchestrator | 22:05:58.683 STDOUT terraform:  + name = (known after apply)
2025-06-01 22:05:58.683459 | orchestrator | 22:05:58.683 STDOUT terraform:  + port = (known after apply)
2025-06-01 22:05:58.683561 | orchestrator | 22:05:58.683 STDOUT terraform:  + uuid = (known after apply)
2025-06-01 22:05:58.683593 | orchestrator | 22:05:58.683 STDOUT terraform:  }
2025-06-01 22:05:58.683616 | orchestrator | 22:05:58.683 STDOUT terraform:  }
2025-06-01 22:05:58.683674 | orchestrator | 22:05:58.683 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created
2025-06-01 22:05:58.683725 | orchestrator | 22:05:58.683 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" {
2025-06-01 22:05:58.683769 | orchestrator | 22:05:58.683 STDOUT terraform:  + access_ip_v4 = (known after apply)
2025-06-01 22:05:58.683813 | orchestrator | 22:05:58.683 STDOUT terraform:  + access_ip_v6 = (known after apply)
2025-06-01 22:05:58.683855 | orchestrator | 22:05:58.683 STDOUT terraform:  + all_metadata = (known after apply)
2025-06-01 22:05:58.683918 | orchestrator | 22:05:58.683 STDOUT terraform:  + all_tags = (known after apply)
2025-06-01 22:05:58.683950 | orchestrator | 22:05:58.683 STDOUT terraform:  + availability_zone = "nova"
2025-06-01 22:05:58.684072 | orchestrator | 22:05:58.684 STDOUT terraform:  + config_drive = true
2025-06-01 22:05:58.684153 | orchestrator | 22:05:58.684 STDOUT terraform:  + created = (known after apply)
2025-06-01 22:05:58.684201 | orchestrator | 22:05:58.684 STDOUT terraform:  + flavor_id = (known after apply)
2025-06-01 22:05:58.684240 | orchestrator | 22:05:58.684 STDOUT terraform:  + flavor_name = "OSISM-8V-32"
2025-06-01 22:05:58.684271 | orchestrator | 22:05:58.684 STDOUT terraform:  + force_delete = false
2025-06-01 22:05:58.684313 | orchestrator | 22:05:58.684 STDOUT terraform:  + hypervisor_hostname = (known after apply)
2025-06-01 22:05:58.684355 | orchestrator | 22:05:58.684 STDOUT terraform:  + id = (known after apply)
2025-06-01 22:05:58.684399 | orchestrator | 22:05:58.684 STDOUT terraform:  + image_id = (known after apply)
2025-06-01 22:05:58.684442 | orchestrator | 22:05:58.684 STDOUT terraform:  + image_name = (known after apply)
2025-06-01 22:05:58.684485 | orchestrator | 22:05:58.684 STDOUT terraform:  + key_pair = "testbed"
2025-06-01 22:05:58.684525 | orchestrator | 22:05:58.684 STDOUT terraform:  + name = "testbed-node-1"
2025-06-01 22:05:58.684608 | orchestrator | 22:05:58.684 STDOUT terraform:  + power_state = "active"
2025-06-01 22:05:58.684656 | orchestrator | 22:05:58.684 STDOUT terraform:  + region = (known after apply)
2025-06-01 22:05:58.684699 | orchestrator | 22:05:58.684 STDOUT terraform:  + security_groups = (known after apply)
2025-06-01 22:05:58.684735 | orchestrator | 22:05:58.684 STDOUT terraform:  + stop_before_destroy = false
2025-06-01 22:05:58.684780 | orchestrator | 22:05:58.684 STDOUT terraform:  + updated = (known after apply)
2025-06-01 22:05:58.684842 | orchestrator | 22:05:58.684 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2025-06-01 22:05:58.684868 | orchestrator | 22:05:58.684 STDOUT terraform:  + block_device {
2025-06-01 22:05:58.684900 | orchestrator | 22:05:58.684 STDOUT terraform:  + boot_index = 0
2025-06-01 22:05:58.684935 | orchestrator | 22:05:58.684 STDOUT terraform:  + delete_on_termination = false
2025-06-01 22:05:58.684973 | orchestrator | 22:05:58.684 STDOUT terraform:  + destination_type = "volume"
2025-06-01 22:05:58.685010 | orchestrator | 22:05:58.684 STDOUT terraform:  + multiattach = false
2025-06-01 22:05:58.685053 | orchestrator | 22:05:58.685 STDOUT terraform:  + source_type = "volume"
2025-06-01 22:05:58.685112 | orchestrator | 22:05:58.685 STDOUT terraform:  + uuid = (known after apply)
2025-06-01 22:05:58.685136 | orchestrator | 22:05:58.685 STDOUT terraform:  }
2025-06-01 22:05:58.685158 | orchestrator | 22:05:58.685 STDOUT terraform:  + network {
2025-06-01 22:05:58.685186 | orchestrator | 22:05:58.685 STDOUT terraform:  + access_network = false
2025-06-01 22:05:58.685224 | orchestrator | 22:05:58.685 STDOUT terraform:  + fixed_ip_v4 = (known after apply)
2025-06-01 22:05:58.685261 | orchestrator | 22:05:58.685 STDOUT terraform:  + fixed_ip_v6 = (known after apply)
2025-06-01 22:05:58.685301 | orchestrator | 22:05:58.685 STDOUT terraform:  + mac = (known after apply)
2025-06-01 22:05:58.685340 | orchestrator | 22:05:58.685 STDOUT terraform:  + name = (known after apply)
2025-06-01 22:05:58.685378 | orchestrator | 22:05:58.685 STDOUT terraform:  + port = (known after apply)
2025-06-01 22:05:58.685451 | orchestrator | 22:05:58.685 STDOUT terraform:  + uuid = (known after apply)
2025-06-01 22:05:58.685475 | orchestrator | 22:05:58.685 STDOUT terraform:  }
2025-06-01 22:05:58.685497 | orchestrator | 22:05:58.685 STDOUT terraform:  }
2025-06-01 22:05:58.685548 | orchestrator | 22:05:58.685 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created
2025-06-01 22:05:58.685600 | orchestrator | 22:05:58.685 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" {
2025-06-01 22:05:58.685642 | orchestrator | 22:05:58.685 STDOUT terraform:  + access_ip_v4 = (known after apply)
2025-06-01 22:05:58.685687 | orchestrator | 22:05:58.685 STDOUT terraform:  + access_ip_v6 = (known after apply)
2025-06-01 22:05:58.685740 | orchestrator | 22:05:58.685 STDOUT terraform:  + all_metadata = (known after apply)
2025-06-01 22:05:58.685785 | orchestrator | 22:05:58.685 STDOUT terraform:  + all_tags = (known after apply)
2025-06-01 22:05:58.685816 | orchestrator | 22:05:58.685 STDOUT terraform:  + availability_zone = "nova"
2025-06-01 22:05:58.685844 | orchestrator | 22:05:58.685 STDOUT terraform:  + config_drive = true
2025-06-01 22:05:58.685886 | orchestrator | 22:05:58.685 STDOUT terraform:  + created = (known after apply)
2025-06-01 22:05:58.685928 | orchestrator | 22:05:58.685 STDOUT terraform:  + flavor_id = (known after apply)
2025-06-01 22:05:58.685964 | orchestrator | 22:05:58.685 STDOUT terraform:  + flavor_name = "OSISM-8V-32"
2025-06-01 22:05:58.685994 | orchestrator | 22:05:58.685 STDOUT terraform:  + force_delete = false
2025-06-01 22:05:58.686073 | orchestrator | 22:05:58.686 STDOUT terraform:  + hypervisor_hostname = (known after apply)
2025-06-01 22:05:58.686132 | orchestrator | 22:05:58.686 STDOUT terraform:  + id = (known after apply)
2025-06-01 22:05:58.686176 | orchestrator | 22:05:58.686 STDOUT terraform:  + image_id = (known after apply)
2025-06-01 22:05:58.686219 | orchestrator | 22:05:58.686 STDOUT terraform:  + image_name = (known after apply)
2025-06-01 22:05:58.686251 | orchestrator | 22:05:58.686 STDOUT terraform:  + key_pair = "testbed"
2025-06-01 22:05:58.686289 | orchestrator | 22:05:58.686 STDOUT terraform:  + name = "testbed-node-2"
2025-06-01 22:05:58.686321 | orchestrator | 22:05:58.686 STDOUT terraform:  + power_state = "active"
2025-06-01 22:05:58.686362 | orchestrator | 22:05:58.686 STDOUT terraform:  + region = (known after apply)
2025-06-01 22:05:58.686419 | orchestrator | 22:05:58.686 STDOUT terraform:  + security_groups = (known after apply)
2025-06-01 22:05:58.686450 | orchestrator | 22:05:58.686 STDOUT terraform:  + stop_before_destroy = false
2025-06-01 22:05:58.686493 | orchestrator | 22:05:58.686 STDOUT terraform:  + updated = (known after apply)
2025-06-01 22:05:58.686549 | orchestrator | 22:05:58.686 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2025-06-01 22:05:58.686573 | orchestrator | 22:05:58.686 STDOUT terraform:  + block_device {
2025-06-01 22:05:58.686604 | orchestrator | 22:05:58.686 STDOUT terraform:  + boot_index = 0
2025-06-01 22:05:58.686640 | orchestrator | 22:05:58.686 STDOUT terraform:  + delete_on_termination = false
2025-06-01 22:05:58.686675 | orchestrator | 22:05:58.686 STDOUT terraform:  + destination_type = "volume"
2025-06-01 22:05:58.686710 | orchestrator | 22:05:58.686 STDOUT terraform:  +
multiattach = false 2025-06-01 22:05:58.686750 | orchestrator | 22:05:58.686 STDOUT terraform:  + source_type = "volume" 2025-06-01 22:05:58.686794 | orchestrator | 22:05:58.686 STDOUT terraform:  + uuid = (known after apply) 2025-06-01 22:05:58.686818 | orchestrator | 22:05:58.686 STDOUT terraform:  } 2025-06-01 22:05:58.686841 | orchestrator | 22:05:58.686 STDOUT terraform:  + network { 2025-06-01 22:05:58.686869 | orchestrator | 22:05:58.686 STDOUT terraform:  + access_network = false 2025-06-01 22:05:58.686912 | orchestrator | 22:05:58.686 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-01 22:05:58.686951 | orchestrator | 22:05:58.686 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-01 22:05:58.686989 | orchestrator | 22:05:58.686 STDOUT terraform:  + mac = (known after apply) 2025-06-01 22:05:58.687027 | orchestrator | 22:05:58.686 STDOUT terraform:  + name = (known after apply) 2025-06-01 22:05:58.687065 | orchestrator | 22:05:58.687 STDOUT terraform:  + port = (known after apply) 2025-06-01 22:05:58.687133 | orchestrator | 22:05:58.687 STDOUT terraform:  + uuid = (known after apply) 2025-06-01 22:05:58.687157 | orchestrator | 22:05:58.687 STDOUT terraform:  } 2025-06-01 22:05:58.687179 | orchestrator | 22:05:58.687 STDOUT terraform:  } 2025-06-01 22:05:58.687229 | orchestrator | 22:05:58.687 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created 2025-06-01 22:05:58.687282 | orchestrator | 22:05:58.687 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-01 22:05:58.687324 | orchestrator | 22:05:58.687 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-01 22:05:58.687368 | orchestrator | 22:05:58.687 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-01 22:05:58.687417 | orchestrator | 22:05:58.687 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-01 22:05:58.687461 | orchestrator | 22:05:58.687 STDOUT terraform:  + all_tags = (known after apply) 2025-06-01 22:05:58.687491 | orchestrator | 22:05:58.687 STDOUT terraform:  + availability_zone = "nova" 2025-06-01 22:05:58.687519 | orchestrator | 22:05:58.687 STDOUT terraform:  + config_drive = true 2025-06-01 22:05:58.687561 | orchestrator | 22:05:58.687 STDOUT terraform:  + created = (known after apply) 2025-06-01 22:05:58.687604 | orchestrator | 22:05:58.687 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-01 22:05:58.687640 | orchestrator | 22:05:58.687 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-01 22:05:58.687671 | orchestrator | 22:05:58.687 STDOUT terraform:  + force_delete = false 2025-06-01 22:05:58.687711 | orchestrator | 22:05:58.687 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-01 22:05:58.687753 | orchestrator | 22:05:58.687 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:05:58.687794 | orchestrator | 22:05:58.687 STDOUT terraform:  + image_id = (known after apply) 2025-06-01 22:05:58.687836 | orchestrator | 22:05:58.687 STDOUT terraform:  + image_name = (known after apply) 2025-06-01 22:05:58.687869 | orchestrator | 22:05:58.687 STDOUT terraform:  + key_pair = "testbed" 2025-06-01 22:05:58.687907 | orchestrator | 22:05:58.687 STDOUT terraform:  + name = "testbed-node-3" 2025-06-01 22:05:58.687939 | orchestrator | 22:05:58.687 STDOUT terraform:  + power_state = "active" 2025-06-01 22:05:58.687982 | orchestrator | 22:05:58.687 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:05:58.688026 | orchestrator | 22:05:58.687 
STDOUT terraform:  + security_groups = (known after apply) 2025-06-01 22:05:58.688059 | orchestrator | 22:05:58.688 STDOUT terraform:  + stop_before_destroy = false 2025-06-01 22:05:58.688123 | orchestrator | 22:05:58.688 STDOUT terraform:  + updated = (known after apply) 2025-06-01 22:05:58.688181 | orchestrator | 22:05:58.688 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-01 22:05:58.688205 | orchestrator | 22:05:58.688 STDOUT terraform:  + block_device { 2025-06-01 22:05:58.688237 | orchestrator | 22:05:58.688 STDOUT terraform:  + boot_index = 0 2025-06-01 22:05:58.688271 | orchestrator | 22:05:58.688 STDOUT terraform:  + delete_on_termination = false 2025-06-01 22:05:58.688308 | orchestrator | 22:05:58.688 STDOUT terraform:  + destination_type = "volume" 2025-06-01 22:05:58.688342 | orchestrator | 22:05:58.688 STDOUT terraform:  + multiattach = false 2025-06-01 22:05:58.688379 | orchestrator | 22:05:58.688 STDOUT terraform:  + source_type = "volume" 2025-06-01 22:05:58.688448 | orchestrator | 22:05:58.688 STDOUT terraform:  + uuid = (known after apply) 2025-06-01 22:05:58.688470 | orchestrator | 22:05:58.688 STDOUT terraform:  } 2025-06-01 22:05:58.688493 | orchestrator | 22:05:58.688 STDOUT terraform:  + network { 2025-06-01 22:05:58.688520 | orchestrator | 22:05:58.688 STDOUT terraform:  + access_network = false 2025-06-01 22:05:58.688557 | orchestrator | 22:05:58.688 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-01 22:05:58.688594 | orchestrator | 22:05:58.688 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-01 22:05:58.688633 | orchestrator | 22:05:58.688 STDOUT terraform:  + mac = (known after apply) 2025-06-01 22:05:58.688671 | orchestrator | 22:05:58.688 STDOUT terraform:  + name = (known after apply) 2025-06-01 22:05:58.688709 | orchestrator | 22:05:58.688 STDOUT terraform:  + port = (known after apply) 2025-06-01 22:05:58.688748 | orchestrator | 22:05:58.688 STDOUT terraform:  + uuid = (known after apply) 2025-06-01 22:05:58.688771 | orchestrator | 22:05:58.688 STDOUT terraform:  } 2025-06-01 22:05:58.688793 | orchestrator | 22:05:58.688 STDOUT terraform:  } 2025-06-01 22:05:58.688842 | orchestrator | 22:05:58.688 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created 2025-06-01 22:05:58.688891 | orchestrator | 22:05:58.688 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-01 22:05:58.688933 | orchestrator | 22:05:58.688 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-01 22:05:58.688976 | orchestrator | 22:05:58.688 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-01 22:05:58.689019 | orchestrator | 22:05:58.688 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-01 22:05:58.689061 | orchestrator | 22:05:58.689 STDOUT terraform:  + all_tags = (known after apply) 2025-06-01 22:05:58.689103 | orchestrator | 22:05:58.689 STDOUT terraform:  + availability_zone = "nova" 2025-06-01 22:05:58.689131 | orchestrator | 22:05:58.689 STDOUT terraform:  + config_drive = true 2025-06-01 22:05:58.689173 | orchestrator | 22:05:58.689 STDOUT terraform:  + created = (known after apply) 2025-06-01 22:05:58.689214 | orchestrator | 22:05:58.689 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-01 22:05:58.689256 | orchestrator | 22:05:58.689 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-01 22:05:58.689286 | orchestrator | 22:05:58.689 STDOUT terraform:  + force_delete = false 2025-06-01 22:05:58.689328 | 
orchestrator | 22:05:58.689 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-01 22:05:58.689371 | orchestrator | 22:05:58.689 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:05:58.689413 | orchestrator | 22:05:58.689 STDOUT terraform:  + image_id = (known after apply) 2025-06-01 22:05:58.689454 | orchestrator | 22:05:58.689 STDOUT terraform:  + image_name = (known after apply) 2025-06-01 22:05:58.689487 | orchestrator | 22:05:58.689 STDOUT terraform:  + key_pair = "testbed" 2025-06-01 22:05:58.689524 | orchestrator | 22:05:58.689 STDOUT terraform:  + name = "testbed-node-4" 2025-06-01 22:05:58.689556 | orchestrator | 22:05:58.689 STDOUT terraform:  + power_state = "active" 2025-06-01 22:05:58.689597 | orchestrator | 22:05:58.689 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:05:58.689638 | orchestrator | 22:05:58.689 STDOUT terraform:  + security_groups = (known after apply) 2025-06-01 22:05:58.689668 | orchestrator | 22:05:58.689 STDOUT terraform:  + stop_before_destroy = false 2025-06-01 22:05:58.689710 | orchestrator | 22:05:58.689 STDOUT terraform:  + updated = (known after apply) 2025-06-01 22:05:58.689766 | orchestrator | 22:05:58.689 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-01 22:05:58.689790 | orchestrator | 22:05:58.689 STDOUT terraform:  + block_device { 2025-06-01 22:05:58.689821 | orchestrator | 22:05:58.689 STDOUT terraform:  + boot_index = 0 2025-06-01 22:05:58.689855 | orchestrator | 22:05:58.689 STDOUT terraform:  + delete_on_termination = false 2025-06-01 22:05:58.689892 | orchestrator | 22:05:58.689 STDOUT terraform:  + destination_type = "volume" 2025-06-01 22:05:58.689931 | orchestrator | 22:05:58.689 STDOUT terraform:  + multiattach = false 2025-06-01 22:05:58.689970 | orchestrator | 22:05:58.689 STDOUT terraform:  + source_type = "volume" 2025-06-01 22:05:58.690030 | orchestrator | 22:05:58.689 STDOUT terraform:  + uuid = (known after apply) 2025-06-01 22:05:58.690056 | orchestrator | 22:05:58.690 STDOUT terraform:  } 2025-06-01 22:05:58.690078 | orchestrator | 22:05:58.690 STDOUT terraform:  + network { 2025-06-01 22:05:58.690134 | orchestrator | 22:05:58.690 STDOUT terraform:  + access_network = false 2025-06-01 22:05:58.690175 | orchestrator | 22:05:58.690 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-01 22:05:58.690215 | orchestrator | 22:05:58.690 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-01 22:05:58.690255 | orchestrator | 22:05:58.690 STDOUT terraform:  + mac = (known after apply) 2025-06-01 22:05:58.690294 | orchestrator | 22:05:58.690 STDOUT terraform:  + name = (known after apply) 2025-06-01 22:05:58.690334 | orchestrator | 22:05:58.690 STDOUT terraform:  + port = (known after apply) 2025-06-01 22:05:58.690372 | orchestrator | 22:05:58.690 STDOUT terraform:  + uuid = (known after apply) 2025-06-01 22:05:58.690401 | orchestrator | 22:05:58.690 STDOUT terraform:  } 2025-06-01 22:05:58.690423 | orchestrator | 22:05:58.690 STDOUT terraform:  } 2025-06-01 22:05:58.690473 | orchestrator | 22:05:58.690 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-06-01 22:05:58.690522 | orchestrator | 22:05:58.690 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-01 22:05:58.690564 | orchestrator | 22:05:58.690 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-01 22:05:58.690609 | orchestrator | 22:05:58.690 STDOUT terraform:  + access_ip_v6 = (known after 
apply) 2025-06-01 22:05:58.690652 | orchestrator | 22:05:58.690 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-01 22:05:58.690695 | orchestrator | 22:05:58.690 STDOUT terraform:  + all_tags = (known after apply) 2025-06-01 22:05:58.690727 | orchestrator | 22:05:58.690 STDOUT terraform:  + availability_zone = "nova" 2025-06-01 22:05:58.690756 | orchestrator | 22:05:58.690 STDOUT terraform:  + config_drive = true 2025-06-01 22:05:58.690801 | orchestrator | 22:05:58.690 STDOUT terraform:  + created = (known after apply) 2025-06-01 22:05:58.690845 | orchestrator | 22:05:58.690 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-01 22:05:58.690881 | orchestrator | 22:05:58.690 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-01 22:05:58.690911 | orchestrator | 22:05:58.690 STDOUT terraform:  + force_delete = false 2025-06-01 22:05:58.690953 | orchestrator | 22:05:58.690 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-01 22:05:58.690997 | orchestrator | 22:05:58.690 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:05:58.691046 | orchestrator | 22:05:58.691 STDOUT terraform:  + image_id = (known after apply) 2025-06-01 22:05:58.691100 | orchestrator | 22:05:58.691 STDOUT terraform:  + image_name = (known after apply) 2025-06-01 22:05:58.691132 | orchestrator | 22:05:58.691 STDOUT terraform:  + key_pair = "testbed" 2025-06-01 22:05:58.691171 | orchestrator | 22:05:58.691 STDOUT terraform:  + name = "testbed-node-5" 2025-06-01 22:05:58.691203 | orchestrator | 22:05:58.691 STDOUT terraform:  + power_state = "active" 2025-06-01 22:05:58.691245 | orchestrator | 22:05:58.691 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:05:58.691286 | orchestrator | 22:05:58.691 STDOUT terraform:  + security_groups = (known after apply) 2025-06-01 22:05:58.691317 | orchestrator | 22:05:58.691 STDOUT terraform:  + stop_before_destroy = false 2025-06-01 22:05:58.691358 | orchestrator | 22:05:58.691 STDOUT terraform:  + updated = (known after apply) 2025-06-01 22:05:58.691414 | orchestrator | 22:05:58.691 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-01 22:05:58.691437 | orchestrator | 22:05:58.691 STDOUT terraform:  + block_device { 2025-06-01 22:05:58.691468 | orchestrator | 22:05:58.691 STDOUT terraform:  + boot_index = 0 2025-06-01 22:05:58.691502 | orchestrator | 22:05:58.691 STDOUT terraform:  + delete_on_termination = false 2025-06-01 22:05:58.691538 | orchestrator | 22:05:58.691 STDOUT terraform:  + destination_type = "volume" 2025-06-01 22:05:58.691586 | orchestrator | 22:05:58.691 STDOUT terraform:  + multiattach = false 2025-06-01 22:05:58.691622 | orchestrator | 22:05:58.691 STDOUT terraform:  + source_type = "volume" 2025-06-01 22:05:58.691666 | orchestrator | 22:05:58.691 STDOUT terraform:  + uuid = (known after apply) 2025-06-01 22:05:58.691688 | orchestrator | 22:05:58.691 STDOUT terraform:  } 2025-06-01 22:05:58.691713 | orchestrator | 22:05:58.691 STDOUT terraform:  + network { 2025-06-01 22:05:58.691741 | orchestrator | 22:05:58.691 STDOUT terraform:  + access_network = false 2025-06-01 22:05:58.691778 | orchestrator | 22:05:58.691 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-01 22:05:58.691816 | orchestrator | 22:05:58.691 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-01 22:05:58.691857 | orchestrator | 22:05:58.691 STDOUT terraform:  + mac = (known after apply) 2025-06-01 22:05:58.691895 | orchestrator | 22:05:58.691 STDOUT terraform:  + name = 
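For reference, plan entries like the node_server blocks above would typically come from a single count-based resource. A minimal sketch, assuming the count (inferred from the indices in the plan), the user_data source, and the boot-volume resource name, none of which are visible in this excerpt:

    resource "openstack_compute_instance_v2" "node_server" {
      count             = 6                                  # indices [0]..[5] inferred from the plan
      name              = "testbed-node-${count.index}"
      availability_zone = "nova"
      flavor_name       = "OSISM-8V-32"
      key_pair          = openstack_compute_keypair_v2.key.name
      config_drive      = true
      power_state       = "active"
      user_data         = file("${path.module}/node_user_data.yml")  # assumed source; the plan only shows its hash

      # Boot from a pre-created volume instead of an ephemeral image disk.
      block_device {
        uuid                  = openstack_blockstorage_volume_v3.node_base[count.index].id  # assumed volume resource
        source_type           = "volume"
        destination_type      = "volume"
        boot_index            = 0
        delete_on_termination = false
      }

      network {
        port = openstack_networking_port_v2.node_port_management[count.index].id
      }
    }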
  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }
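Because public_key is "(known after apply)" and private_key is a sensitive value, the provider is generating the keypair rather than importing one. A name-only resource is enough for that; the output block is an illustrative assumption:

    resource "openstack_compute_keypair_v2" "key" {
      name = "testbed"
      # With no public_key argument the provider generates a key pair
      # and exports the private key as a sensitive attribute.
    }

    output "testbed_private_key" {
      value     = openstack_compute_keypair_v2.key.private_key
      sensitive = true
    }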
"openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-01 22:05:58.694286 | orchestrator | 22:05:58.694 STDOUT terraform:  + device = (known after apply) 2025-06-01 22:05:58.694324 | orchestrator | 22:05:58.694 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:05:58.694361 | orchestrator | 22:05:58.694 STDOUT terraform:  + instance_id = (known after apply) 2025-06-01 22:05:58.694401 | orchestrator | 22:05:58.694 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:05:58.694437 | orchestrator | 22:05:58.694 STDOUT terraform:  + volume_id = (known after apply) 2025-06-01 22:05:58.694459 | orchestrator | 22:05:58.694 STDOUT terraform:  } 2025-06-01 22:05:58.694518 | orchestrator | 22:05:58.694 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2025-06-01 22:05:58.694577 | orchestrator | 22:05:58.694 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-01 22:05:58.694622 | orchestrator | 22:05:58.694 STDOUT terraform:  + device = (known after apply) 2025-06-01 22:05:58.694664 | orchestrator | 22:05:58.694 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:05:58.694700 | orchestrator | 22:05:58.694 STDOUT terraform:  + instance_id = (known after apply) 2025-06-01 22:05:58.694737 | orchestrator | 22:05:58.694 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:05:58.694774 | orchestrator | 22:05:58.694 STDOUT terraform:  + volume_id = (known after apply) 2025-06-01 22:05:58.694795 | orchestrator | 22:05:58.694 STDOUT terraform:  } 2025-06-01 22:05:58.694852 | orchestrator | 22:05:58.694 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2025-06-01 22:05:58.694909 | orchestrator | 22:05:58.694 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-01 22:05:58.694945 | orchestrator | 22:05:58.694 STDOUT terraform:  + device = (known after apply) 2025-06-01 22:05:58.694981 | orchestrator | 22:05:58.694 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:05:58.695027 | orchestrator | 22:05:58.694 STDOUT terraform:  + instance_id = (known after apply) 2025-06-01 22:05:58.695063 | orchestrator | 22:05:58.695 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:05:58.695116 | orchestrator | 22:05:58.695 STDOUT terraform:  + volume_id = (known after apply) 2025-06-01 22:05:58.695139 | orchestrator | 22:05:58.695 STDOUT terraform:  } 2025-06-01 22:05:58.695200 | orchestrator | 22:05:58.695 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created 2025-06-01 22:05:58.695256 | orchestrator | 22:05:58.695 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-01 22:05:58.695292 | orchestrator | 22:05:58.695 STDOUT terraform:  + device = (known after apply) 2025-06-01 22:05:58.695328 | orchestrator | 22:05:58.695 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:05:58.695363 | orchestrator | 22:05:58.695 STDOUT terraform:  + instance_id = (known after apply) 2025-06-01 22:05:58.695399 | orchestrator | 22:05:58.695 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:05:58.695434 | orchestrator | 22:05:58.695 STDOUT terraform:  + volume_id = (known after apply) 2025-06-01 22:05:58.695455 | orchestrator | 22:05:58.695 STDOUT terraform:  } 2025-06-01 22:05:58.695511 | orchestrator | 22:05:58.695 STDOUT terraform:  # 
openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2025-06-01 22:05:58.695568 | orchestrator | 22:05:58.695 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-01 22:05:58.695603 | orchestrator | 22:05:58.695 STDOUT terraform:  + device = (known after apply) 2025-06-01 22:05:58.695642 | orchestrator | 22:05:58.695 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:05:58.695677 | orchestrator | 22:05:58.695 STDOUT terraform:  + instance_id = (known after apply) 2025-06-01 22:05:58.695714 | orchestrator | 22:05:58.695 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:05:58.695749 | orchestrator | 22:05:58.695 STDOUT terraform:  + volume_id = (known after apply) 2025-06-01 22:05:58.695773 | orchestrator | 22:05:58.695 STDOUT terraform:  } 2025-06-01 22:05:58.695828 | orchestrator | 22:05:58.695 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2025-06-01 22:05:58.695883 | orchestrator | 22:05:58.695 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-01 22:05:58.695919 | orchestrator | 22:05:58.695 STDOUT terraform:  + device = (known after apply) 2025-06-01 22:05:58.695956 | orchestrator | 22:05:58.695 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:05:58.695991 | orchestrator | 22:05:58.695 STDOUT terraform:  + instance_id = (known after apply) 2025-06-01 22:05:58.696026 | orchestrator | 22:05:58.695 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:05:58.696063 | orchestrator | 22:05:58.696 STDOUT terraform:  + volume_id = (known after apply) 2025-06-01 22:05:58.696093 | orchestrator | 22:05:58.696 STDOUT terraform:  } 2025-06-01 22:05:58.696152 | orchestrator | 22:05:58.696 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2025-06-01 22:05:58.696212 | orchestrator | 22:05:58.696 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-01 22:05:58.696249 | orchestrator | 22:05:58.696 STDOUT terraform:  + device = (known after apply) 2025-06-01 22:05:58.696286 | orchestrator | 22:05:58.696 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:05:58.696320 | orchestrator | 22:05:58.696 STDOUT terraform:  + instance_id = (known after apply) 2025-06-01 22:05:58.696356 | orchestrator | 22:05:58.696 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:05:58.696391 | orchestrator | 22:05:58.696 STDOUT terraform:  + volume_id = (known after apply) 2025-06-01 22:05:58.696411 | orchestrator | 22:05:58.696 STDOUT terraform:  } 2025-06-01 22:05:58.696476 | orchestrator | 22:05:58.696 STDOUT terraform:  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-06-01 22:05:58.696539 | orchestrator | 22:05:58.696 STDOUT terraform:  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-06-01 22:05:58.696575 | orchestrator | 22:05:58.696 STDOUT terraform:  + fixed_ip = (known after apply) 2025-06-01 22:05:58.696612 | orchestrator | 22:05:58.696 STDOUT terraform:  + floating_ip = (known after apply) 2025-06-01 22:05:58.696650 | orchestrator | 22:05:58.696 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:05:58.696688 | orchestrator | 22:05:58.696 STDOUT terraform:  + port_id = (known after apply) 2025-06-01 22:05:58.696722 | orchestrator | 22:05:58.696 STDOUT terraform:  + region = 
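Nine attachment entries with every attribute "(known after apply)" suggest a count-based attach resource whose instance and volume IDs are computed. A minimal sketch; the count comes from the plan indices, but the instance/volume mapping and the volume resource name are assumptions:

    resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      count       = 9                                                            # indices [0]..[8] in the plan
      instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id  # assumed mapping of volumes to nodes
      volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id   # assumed data-volume resource
    }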
  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }
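A sketch consistent with these two entries: the floating IP is allocated from the "public" pool and bound to the manager's management port. The wiring between the resources is an assumption, since the plan shows only computed values:

    resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      pool = "public"
    }

    resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
      port_id     = openstack_networking_port_v2.manager_port_management.id
    }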
  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)
      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }
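A sketch of the management network and the manager port above. A subnet is implied by the router interface and the fixed IPs, but its plan entry is outside this excerpt; its name and CIDR here are assumptions (the /20 is chosen to contain the 192.168.16.x addresses shown):

    resource "openstack_networking_network_v2" "net_management" {
      name                    = "net-testbed-management"
      availability_zone_hints = ["nova"]
    }

    resource "openstack_networking_subnet_v2" "subnet_management" {
      name       = "subnet-testbed-management"            # assumed name
      network_id = openstack_networking_network_v2.net_management.id
      cidr       = "192.168.16.0/20"                      # assumed, consistent with the fixed IPs
      ip_version = 4
    }

    resource "openstack_networking_port_v2" "manager_port_management" {
      network_id = openstack_networking_network_v2.net_management.id

      fixed_ip {
        subnet_id  = openstack_networking_subnet_v2.subnet_management.id
        ip_address = "192.168.16.5"
      }

      # VIP/VRRP-style addresses this port may legitimately source traffic from.
      allowed_address_pairs {
        ip_address = "192.168.112.0/20"
      }
      allowed_address_pairs {
        ip_address = "192.168.16.8/20"
      }
    }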
  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }
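The six node ports above differ only in their fixed IP (192.168.16.10 through 192.168.16.15), which points at a single count-based resource. A sketch that reproduces this addressing; the subnet reference is the assumed resource from the network sketch earlier, and the /20 base is likewise assumed:

    resource "openstack_networking_port_v2" "node_port_management" {
      count      = 6
      network_id = openstack_networking_network_v2.net_management.id

      fixed_ip {
        subnet_id  = openstack_networking_subnet_v2.subnet_management.id      # assumed subnet, see above
        ip_address = cidrhost("192.168.16.0/20", 10 + count.index)            # 192.168.16.10 .. 192.168.16.15
      }

      allowed_address_pairs {
        ip_address = "192.168.112.0/20"
      }
      allowed_address_pairs {
        ip_address = "192.168.16.254/20"
      }
      allowed_address_pairs {
        ip_address = "192.168.16.8/20"
      }
      allowed_address_pairs {
        ip_address = "192.168.16.9/20"
      }
    }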
22:05:58.713 STDOUT terraform:  + enable_snat = (known after apply) 2025-06-01 22:05:58.713478 | orchestrator | 22:05:58.713 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-06-01 22:05:58.713515 | orchestrator | 22:05:58.713 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:05:58.713551 | orchestrator | 22:05:58.713 STDOUT terraform:  + name = "testbed" 2025-06-01 22:05:58.713595 | orchestrator | 22:05:58.713 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:05:58.713632 | orchestrator | 22:05:58.713 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 22:05:58.713661 | orchestrator | 22:05:58.713 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-06-01 22:05:58.713680 | orchestrator | 22:05:58.713 STDOUT terraform:  } 2025-06-01 22:05:58.713737 | orchestrator | 22:05:58.713 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-06-01 22:05:58.713796 | orchestrator | 22:05:58.713 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-06-01 22:05:58.713816 | orchestrator | 22:05:58.713 STDOUT terraform:  + description = "ssh" 2025-06-01 22:05:58.713840 | orchestrator | 22:05:58.713 STDOUT terraform:  + direction = "ingress" 2025-06-01 22:05:58.713884 | orchestrator | 22:05:58.713 STDOUT terraform:  + ethertype = "IPv4" 2025-06-01 22:05:58.713917 | orchestrator | 22:05:58.713 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:05:58.713938 | orchestrator | 22:05:58.713 STDOUT terraform:  + port_range_max = 22 2025-06-01 22:05:58.713958 | orchestrator | 22:05:58.713 STDOUT terraform:  + port_range_min = 22 2025-06-01 22:05:58.713979 | orchestrator | 22:05:58.713 STDOUT terraform:  + protocol = "tcp" 2025-06-01 22:05:58.714010 | orchestrator | 22:05:58.713 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:05:58.714056 | orchestrator | 22:05:58.714 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-01 22:05:58.714096 | orchestrator | 22:05:58.714 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-01 22:05:58.714131 | orchestrator | 22:05:58.714 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-01 22:05:58.714162 | orchestrator | 22:05:58.714 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 22:05:58.714175 | orchestrator | 22:05:58.714 STDOUT terraform:  } 2025-06-01 22:05:58.714229 | orchestrator | 22:05:58.714 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-06-01 22:05:58.714282 | orchestrator | 22:05:58.714 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-06-01 22:05:58.714307 | orchestrator | 22:05:58.714 STDOUT terraform:  + description = "wireguard" 2025-06-01 22:05:58.714332 | orchestrator | 22:05:58.714 STDOUT terraform:  + direction = "ingress" 2025-06-01 22:05:58.714353 | orchestrator | 22:05:58.714 STDOUT terraform:  + ethertype = "IPv4" 2025-06-01 22:05:58.714393 | orchestrator | 22:05:58.714 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:05:58.714414 | orchestrator | 22:05:58.714 STDOUT terraform:  + port_range_max = 51820 2025-06-01 22:05:58.714435 | orchestrator | 22:05:58.714 STDOUT terraform:  + port_range_min = 51820 2025-06-01 22:05:58.714457 | orchestrator | 22:05:58.714 STDOUT terraform:  + protocol = "udp" 2025-06-01 22:05:58.714490 | orchestrator | 22:05:58.714 
STDOUT terraform:  + region = (known after apply) 2025-06-01 22:05:58.714519 | orchestrator | 22:05:58.714 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-01 22:05:58.714544 | orchestrator | 22:05:58.714 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-01 22:05:58.714574 | orchestrator | 22:05:58.714 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-01 22:05:58.714606 | orchestrator | 22:05:58.714 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 22:05:58.714618 | orchestrator | 22:05:58.714 STDOUT terraform:  } 2025-06-01 22:05:58.714670 | orchestrator | 22:05:58.714 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-06-01 22:05:58.714723 | orchestrator | 22:05:58.714 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-06-01 22:05:58.714749 | orchestrator | 22:05:58.714 STDOUT terraform:  + direction = "ingress" 2025-06-01 22:05:58.714770 | orchestrator | 22:05:58.714 STDOUT terraform:  + ethertype = "IPv4" 2025-06-01 22:05:58.714808 | orchestrator | 22:05:58.714 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:05:58.714826 | orchestrator | 22:05:58.714 STDOUT terraform:  + protocol = "tcp" 2025-06-01 22:05:58.714856 | orchestrator | 22:05:58.714 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:05:58.714887 | orchestrator | 22:05:58.714 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-01 22:05:58.714917 | orchestrator | 22:05:58.714 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-06-01 22:05:58.714948 | orchestrator | 22:05:58.714 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-01 22:05:58.714981 | orchestrator | 22:05:58.714 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 22:05:58.714993 | orchestrator | 22:05:58.714 STDOUT terraform:  } 2025-06-01 22:05:58.715042 | orchestrator | 22:05:58.714 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-06-01 22:05:58.715120 | orchestrator | 22:05:58.715 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-06-01 22:05:58.715129 | orchestrator | 22:05:58.715 STDOUT terraform:  + direction = "ingress" 2025-06-01 22:05:58.715148 | orchestrator | 22:05:58.715 STDOUT terraform:  + ethertype = "IPv4" 2025-06-01 22:05:58.715180 | orchestrator | 22:05:58.715 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:05:58.715200 | orchestrator | 22:05:58.715 STDOUT terraform:  + protocol = "udp" 2025-06-01 22:05:58.715232 | orchestrator | 22:05:58.715 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:05:58.715262 | orchestrator | 22:05:58.715 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-01 22:05:58.715291 | orchestrator | 22:05:58.715 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-06-01 22:05:58.715322 | orchestrator | 22:05:58.715 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-01 22:05:58.715353 | orchestrator | 22:05:58.715 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 22:05:58.715367 | orchestrator | 22:05:58.715 STDOUT terraform:  } 2025-06-01 22:05:58.715421 | orchestrator | 22:05:58.715 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-06-01 22:05:58.715472 | orchestrator | 22:05:58.715 STDOUT terraform:  + 
resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-06-01 22:05:58.715497 | orchestrator | 22:05:58.715 STDOUT terraform:  + direction = "ingress" 2025-06-01 22:05:58.715517 | orchestrator | 22:05:58.715 STDOUT terraform:  + ethertype = "IPv4" 2025-06-01 22:05:58.715551 | orchestrator | 22:05:58.715 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:05:58.715557 | orchestrator | 22:05:58.715 STDOUT terraform:  + prot 2025-06-01 22:05:58.715631 | orchestrator | 22:05:58.715 STDOUT terraform: ocol = "icmp" 2025-06-01 22:05:58.715661 | orchestrator | 22:05:58.715 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:05:58.715725 | orchestrator | 22:05:58.715 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-01 22:05:58.715750 | orchestrator | 22:05:58.715 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-01 22:05:58.715781 | orchestrator | 22:05:58.715 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-01 22:05:58.715813 | orchestrator | 22:05:58.715 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 22:05:58.715822 | orchestrator | 22:05:58.715 STDOUT terraform:  } 2025-06-01 22:05:58.715879 | orchestrator | 22:05:58.715 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-06-01 22:05:58.715931 | orchestrator | 22:05:58.715 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-06-01 22:05:58.715955 | orchestrator | 22:05:58.715 STDOUT terraform:  + direction = "ingress" 2025-06-01 22:05:58.715984 | orchestrator | 22:05:58.715 STDOUT terraform:  + ethertype = "IPv4" 2025-06-01 22:05:58.716007 | orchestrator | 22:05:58.715 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:05:58.716033 | orchestrator | 22:05:58.716 STDOUT terraform:  + protocol = "tcp" 2025-06-01 22:05:58.716060 | orchestrator | 22:05:58.716 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:05:58.716101 | orchestrator | 22:05:58.716 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-01 22:05:58.716125 | orchestrator | 22:05:58.716 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-01 22:05:58.716155 | orchestrator | 22:05:58.716 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-01 22:05:58.716187 | orchestrator | 22:05:58.716 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 22:05:58.716201 | orchestrator | 22:05:58.716 STDOUT terraform:  } 2025-06-01 22:05:58.716253 | orchestrator | 22:05:58.716 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-06-01 22:05:58.716304 | orchestrator | 22:05:58.716 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-06-01 22:05:58.716328 | orchestrator | 22:05:58.716 STDOUT terraform:  + direction = "ingress" 2025-06-01 22:05:58.716348 | orchestrator | 22:05:58.716 STDOUT terraform:  + ethertype = "IPv4" 2025-06-01 22:05:58.716380 | orchestrator | 22:05:58.716 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:05:58.716402 | orchestrator | 22:05:58.716 STDOUT terraform:  + protocol = "udp" 2025-06-01 22:05:58.716432 | orchestrator | 22:05:58.716 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:05:58.716463 | orchestrator | 22:05:58.716 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-01 22:05:58.716486 | orchestrator | 22:05:58.716 STDOUT terraform:  + 
remote_ip_prefix = "0.0.0.0/0" 2025-06-01 22:05:58.716519 | orchestrator | 22:05:58.716 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-01 22:05:58.716548 | orchestrator | 22:05:58.716 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 22:05:58.716563 | orchestrator | 22:05:58.716 STDOUT terraform:  } 2025-06-01 22:05:58.716613 | orchestrator | 22:05:58.716 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-06-01 22:05:58.716664 | orchestrator | 22:05:58.716 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-06-01 22:05:58.716695 | orchestrator | 22:05:58.716 STDOUT terraform:  + direction = "ingress" 2025-06-01 22:05:58.716726 | orchestrator | 22:05:58.716 STDOUT terraform:  + ethertype = "IPv4" 2025-06-01 22:05:58.716768 | orchestrator | 22:05:58.716 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:05:58.716805 | orchestrator | 22:05:58.716 STDOUT terraform:  + protocol = "icmp" 2025-06-01 22:05:58.716846 | orchestrator | 22:05:58.716 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:05:58.716878 | orchestrator | 22:05:58.716 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-01 22:05:58.716903 | orchestrator | 22:05:58.716 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-01 22:05:58.716935 | orchestrator | 22:05:58.716 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-01 22:05:58.716966 | orchestrator | 22:05:58.716 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 22:05:58.716980 | orchestrator | 22:05:58.716 STDOUT terraform:  } 2025-06-01 22:05:58.717032 | orchestrator | 22:05:58.716 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-06-01 22:05:58.717081 | orchestrator | 22:05:58.717 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-06-01 22:05:58.717123 | orchestrator | 22:05:58.717 STDOUT terraform:  + description = "vrrp" 2025-06-01 22:05:58.717147 | orchestrator | 22:05:58.717 STDOUT terraform:  + direction = "ingress" 2025-06-01 22:05:58.717168 | orchestrator | 22:05:58.717 STDOUT terraform:  + ethertype = "IPv4" 2025-06-01 22:05:58.717201 | orchestrator | 22:05:58.717 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:05:58.717226 | orchestrator | 22:05:58.717 STDOUT terraform:  + protocol = "112" 2025-06-01 22:05:58.717253 | orchestrator | 22:05:58.717 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:05:58.717283 | orchestrator | 22:05:58.717 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-01 22:05:58.717307 | orchestrator | 22:05:58.717 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-01 22:05:58.717337 | orchestrator | 22:05:58.717 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-01 22:05:58.717368 | orchestrator | 22:05:58.717 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 22:05:58.717382 | orchestrator | 22:05:58.717 STDOUT terraform:  } 2025-06-01 22:05:58.717434 | orchestrator | 22:05:58.717 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-06-01 22:05:58.717483 | orchestrator | 22:05:58.717 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-06-01 22:05:58.717511 | orchestrator | 22:05:58.717 STDOUT terraform:  + all_tags = (known after apply) 
2025-06-01 22:05:58.717546 | orchestrator | 22:05:58.717 STDOUT terraform:  + description = "management security group" 2025-06-01 22:05:58.717574 | orchestrator | 22:05:58.717 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:05:58.717605 | orchestrator | 22:05:58.717 STDOUT terraform:  + name = "testbed-management" 2025-06-01 22:05:58.717633 | orchestrator | 22:05:58.717 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:05:58.717661 | orchestrator | 22:05:58.717 STDOUT terraform:  + stateful = (known after apply) 2025-06-01 22:05:58.717690 | orchestrator | 22:05:58.717 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 22:05:58.717697 | orchestrator | 22:05:58.717 STDOUT terraform:  } 2025-06-01 22:05:58.717746 | orchestrator | 22:05:58.717 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-06-01 22:05:58.717792 | orchestrator | 22:05:58.717 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-06-01 22:05:58.717820 | orchestrator | 22:05:58.717 STDOUT terraform:  + all_tags = (known after apply) 2025-06-01 22:05:58.717848 | orchestrator | 22:05:58.717 STDOUT terraform:  + description = "node security group" 2025-06-01 22:05:58.717877 | orchestrator | 22:05:58.717 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:05:58.717901 | orchestrator | 22:05:58.717 STDOUT terraform:  + name = "testbed-node" 2025-06-01 22:05:58.717929 | orchestrator | 22:05:58.717 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:05:58.717958 | orchestrator | 22:05:58.717 STDOUT terraform:  + stateful = (known after apply) 2025-06-01 22:05:58.717987 | orchestrator | 22:05:58.717 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 22:05:58.718000 | orchestrator | 22:05:58.717 STDOUT terraform:  } 2025-06-01 22:05:58.718061 | orchestrator | 22:05:58.717 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-06-01 22:05:58.718115 | orchestrator | 22:05:58.718 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-06-01 22:05:58.718145 | orchestrator | 22:05:58.718 STDOUT terraform:  + all_tags = (known after apply) 2025-06-01 22:05:58.718175 | orchestrator | 22:05:58.718 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-06-01 22:05:58.718197 | orchestrator | 22:05:58.718 STDOUT terraform:  + dns_nameservers = [ 2025-06-01 22:05:58.718214 | orchestrator | 22:05:58.718 STDOUT terraform:  + "8.8.8.8", 2025-06-01 22:05:58.718232 | orchestrator | 22:05:58.718 STDOUT terraform:  + "9.9.9.9", 2025-06-01 22:05:58.718247 | orchestrator | 22:05:58.718 STDOUT terraform:  ] 2025-06-01 22:05:58.718271 | orchestrator | 22:05:58.718 STDOUT terraform:  + enable_dhcp = true 2025-06-01 22:05:58.718302 | orchestrator | 22:05:58.718 STDOUT terraform:  + gateway_ip = (known after apply) 2025-06-01 22:05:58.718334 | orchestrator | 22:05:58.718 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:05:58.718355 | orchestrator | 22:05:58.718 STDOUT terraform:  + ip_version = 4 2025-06-01 22:05:58.718385 | orchestrator | 22:05:58.718 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-06-01 22:05:58.718419 | orchestrator | 22:05:58.718 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-06-01 22:05:58.718456 | orchestrator | 22:05:58.718 STDOUT terraform:  + name = "subnet-testbed-management" 2025-06-01 22:05:58.718486 | orchestrator | 22:05:58.718 STDOUT terraform:  + network_id = (known 
after apply) 2025-06-01 22:05:58.718508 | orchestrator | 22:05:58.718 STDOUT terraform:  + no_gateway = false 2025-06-01 22:05:58.718538 | orchestrator | 22:05:58.718 STDOUT terraform:  + region = (known after apply) 2025-06-01 22:05:58.718568 | orchestrator | 22:05:58.718 STDOUT terraform:  + service_types = (known after apply) 2025-06-01 22:05:58.718597 | orchestrator | 22:05:58.718 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-01 22:05:58.718617 | orchestrator | 22:05:58.718 STDOUT terraform:  + allocation_pool { 2025-06-01 22:05:58.718641 | orchestrator | 22:05:58.718 STDOUT terraform:  + end = "192.168.31.250" 2025-06-01 22:05:58.718665 | orchestrator | 22:05:58.718 STDOUT terraform:  + start = "192.168.31.200" 2025-06-01 22:05:58.718681 | orchestrator | 22:05:58.718 STDOUT terraform:  } 2025-06-01 22:05:58.718687 | orchestrator | 22:05:58.718 STDOUT terraform:  } 2025-06-01 22:05:58.718715 | orchestrator | 22:05:58.718 STDOUT terraform:  # terraform_data.image will be created 2025-06-01 22:05:58.718772 | orchestrator | 22:05:58.718 STDOUT terraform:  + resource "terraform_data" "image" { 2025-06-01 22:05:58.718779 | orchestrator | 22:05:58.718 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:05:58.718783 | orchestrator | 22:05:58.718 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-06-01 22:05:58.718803 | orchestrator | 22:05:58.718 STDOUT terraform:  + output = (known after apply) 2025-06-01 22:05:58.718817 | orchestrator | 22:05:58.718 STDOUT terraform:  } 2025-06-01 22:05:58.718845 | orchestrator | 22:05:58.718 STDOUT terraform:  # terraform_data.image_node will be created 2025-06-01 22:05:58.718876 | orchestrator | 22:05:58.718 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-06-01 22:05:58.718901 | orchestrator | 22:05:58.718 STDOUT terraform:  + id = (known after apply) 2025-06-01 22:05:58.718922 | orchestrator | 22:05:58.718 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-06-01 22:05:58.718946 | orchestrator | 22:05:58.718 STDOUT terraform:  + output = (known after apply) 2025-06-01 22:05:58.718961 | orchestrator | 22:05:58.718 STDOUT terraform:  } 2025-06-01 22:05:58.718991 | orchestrator | 22:05:58.718 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-06-01 22:05:58.719005 | orchestrator | 22:05:58.718 STDOUT terraform: Changes to Outputs: 2025-06-01 22:05:58.719030 | orchestrator | 22:05:58.719 STDOUT terraform:  + manager_address = (sensitive value) 2025-06-01 22:05:58.719057 | orchestrator | 22:05:58.719 STDOUT terraform:  + private_key = (sensitive value) 2025-06-01 22:05:58.940841 | orchestrator | 22:05:58.940 STDOUT terraform: terraform_data.image_node: Creating... 2025-06-01 22:05:58.941104 | orchestrator | 22:05:58.940 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=8def971d-be0f-41d1-4a7a-68049a531bb0] 2025-06-01 22:05:58.941843 | orchestrator | 22:05:58.941 STDOUT terraform: terraform_data.image: Creating... 2025-06-01 22:05:58.942438 | orchestrator | 22:05:58.942 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=e3988e42-5625-371a-fd4f-d335c62e5a30] 2025-06-01 22:05:58.964845 | orchestrator | 22:05:58.961 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-06-01 22:05:58.964916 | orchestrator | 22:05:58.961 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-06-01 22:05:58.974228 | orchestrator | 22:05:58.974 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 
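The two terraform_data resources above simply pass the name "Ubuntu 24.04" through as tracked values; both image data sources then resolve that name to the same Glance image (cd9ae1ce-…), as the reads around this point show. A minimal sketch of that wiring, with the attribute plumbing and most_recent as assumptions rather than taken from the source:

```hcl
# terraform_data tracks the image name so dependent resources can be replaced
# when it changes; its "output" attribute mirrors "input".
resource "terraform_data" "image" {
  input = "Ubuntu 24.04"
}

# Resolve the name in Glance; most_recent = true is an assumption to pick the
# newest image if several share the name.
data "openstack_images_image_v2" "image" {
  name        = terraform_data.image.output
  most_recent = true
}
```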
2025-06-01 22:05:58.974270 | orchestrator | 22:05:58.974 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-06-01 22:05:58.974305 | orchestrator | 22:05:58.974 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-06-01 22:05:58.974359 | orchestrator | 22:05:58.974 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-06-01 22:05:58.974398 | orchestrator | 22:05:58.974 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-06-01 22:05:58.974444 | orchestrator | 22:05:58.974 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-06-01 22:05:58.974487 | orchestrator | 22:05:58.974 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-06-01 22:05:58.980433 | orchestrator | 22:05:58.980 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-06-01 22:05:59.465911 | orchestrator | 22:05:59.463 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-06-01 22:05:59.477143 | orchestrator | 22:05:59.475 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-06-01 22:05:59.489375 | orchestrator | 22:05:59.489 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-06-01 22:05:59.497197 | orchestrator | 22:05:59.496 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-06-01 22:06:04.990837 | orchestrator | 22:06:04.990 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=c386df10-098a-4adb-98ea-a9f2dd09e72b] 2025-06-01 22:06:04.995684 | orchestrator | 22:06:04.995 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-06-01 22:06:05.111620 | orchestrator | 22:06:05.110 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2025-06-01 22:06:05.119862 | orchestrator | 22:06:05.119 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-06-01 22:06:08.970737 | orchestrator | 22:06:08.970 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-06-01 22:06:08.970845 | orchestrator | 22:06:08.970 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-06-01 22:06:08.970992 | orchestrator | 22:06:08.970 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-06-01 22:06:08.971277 | orchestrator | 22:06:08.971 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-06-01 22:06:08.971607 | orchestrator | 22:06:08.971 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-06-01 22:06:08.971891 | orchestrator | 22:06:08.971 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-06-01 22:06:08.981809 | orchestrator | 22:06:08.981 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-06-01 22:06:09.476998 | orchestrator | 22:06:09.476 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... 
[10s elapsed] 2025-06-01 22:06:09.498544 | orchestrator | 22:06:09.498 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed] 2025-06-01 22:06:09.598803 | orchestrator | 22:06:09.598 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 11s [id=9eb75d32-600b-4da1-bdd4-064d087d06d5] 2025-06-01 22:06:09.601251 | orchestrator | 22:06:09.601 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 11s [id=39b25e00-2509-407e-b71e-c183a8ac9680] 2025-06-01 22:06:09.611413 | orchestrator | 22:06:09.611 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-06-01 22:06:09.613493 | orchestrator | 22:06:09.613 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-06-01 22:06:09.622221 | orchestrator | 22:06:09.621 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 11s [id=f2cefa5c-3d1d-4277-b121-6d9adea683a7] 2025-06-01 22:06:09.631705 | orchestrator | 22:06:09.631 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-06-01 22:06:09.643598 | orchestrator | 22:06:09.643 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 11s [id=e23ad96a-b832-416d-911f-1711f12500c4] 2025-06-01 22:06:09.654937 | orchestrator | 22:06:09.654 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 11s [id=768ce349-132d-4c04-96b3-035bfe10ebf6] 2025-06-01 22:06:09.662591 | orchestrator | 22:06:09.660 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-06-01 22:06:09.662638 | orchestrator | 22:06:09.660 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 11s [id=a8e8789d-2f8d-4752-a1c5-15f6e96bd27f] 2025-06-01 22:06:09.667839 | orchestrator | 22:06:09.665 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-06-01 22:06:09.667877 | orchestrator | 22:06:09.666 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 11s [id=9f9b614f-8ac1-443f-a8a9-e3e743fec9fb] 2025-06-01 22:06:09.668382 | orchestrator | 22:06:09.668 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-06-01 22:06:09.679861 | orchestrator | 22:06:09.679 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-06-01 22:06:09.718005 | orchestrator | 22:06:09.716 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 11s [id=b890f567-0ad2-40b6-bedf-e62e59fc0322] 2025-06-01 22:06:09.730516 | orchestrator | 22:06:09.730 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-06-01 22:06:09.733812 | orchestrator | 22:06:09.733 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 11s [id=389c9d93-9871-4a47-9a60-ac279d750f3d] 2025-06-01 22:06:09.735529 | orchestrator | 22:06:09.735 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=19912ab6c0c72f35bbaf2c44a67075a435e17ccd] 2025-06-01 22:06:09.743798 | orchestrator | 22:06:09.743 STDOUT terraform: local_file.id_rsa_pub: Creating... 
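local_sensitive_file.id_rsa and local_file.id_rsa_pub, created in under a second here, write a generated SSH keypair to disk, and the "testbed" Nova keypair registered a moment earlier carries the matching public key. A sketch under the assumption that the key comes from a tls_private_key resource; the file paths are placeholders:

```hcl
# Assumed source of the keypair persisted by local_sensitive_file.id_rsa and
# local_file.id_rsa_pub; only the keypair name "testbed" is taken from the log.
resource "tls_private_key" "ssh" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "local_sensitive_file" "id_rsa" {
  content         = tls_private_key.ssh.private_key_openssh
  filename        = "${path.module}/.id_rsa.testbed" # placeholder path
  file_permission = "0600"                           # private key must not be world-readable
}

resource "local_file" "id_rsa_pub" {
  content  = tls_private_key.ssh.public_key_openssh
  filename = "${path.module}/.id_rsa.testbed.pub"    # placeholder path
}

# The same public key backs the Nova keypair created above (id=testbed).
resource "openstack_compute_keypair_v2" "key" {
  name       = "testbed"
  public_key = tls_private_key.ssh.public_key_openssh
}
```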
2025-06-01 22:06:09.754181 | orchestrator | 22:06:09.754 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=fff2124eea0ee58560321921db4437860ef865a0] 2025-06-01 22:06:15.121392 | orchestrator | 22:06:15.121 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed] 2025-06-01 22:06:15.432808 | orchestrator | 22:06:15.432 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=b16f75d9-1f35-401a-92f6-79076ad325ac] 2025-06-01 22:06:15.558595 | orchestrator | 22:06:15.558 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=c2bf0fd8-678a-467c-b582-db5d3cdfa122] 2025-06-01 22:06:15.567183 | orchestrator | 22:06:15.566 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-06-01 22:06:19.612217 | orchestrator | 22:06:19.611 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed] 2025-06-01 22:06:19.614335 | orchestrator | 22:06:19.613 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed] 2025-06-01 22:06:19.633618 | orchestrator | 22:06:19.633 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed] 2025-06-01 22:06:19.662212 | orchestrator | 22:06:19.661 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed] 2025-06-01 22:06:19.667253 | orchestrator | 22:06:19.667 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed] 2025-06-01 22:06:19.670479 | orchestrator | 22:06:19.670 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed] 2025-06-01 22:06:19.979143 | orchestrator | 22:06:19.978 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 10s [id=189a8ba1-bc59-4831-afbf-98fe97dbcace] 2025-06-01 22:06:19.985596 | orchestrator | 22:06:19.985 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 10s [id=9eb8197d-3dc8-4459-9c52-34779715aaef] 2025-06-01 22:06:20.002964 | orchestrator | 22:06:20.002 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 10s [id=f3aa2206-ef16-41e3-9f26-c7be1b94f31f] 2025-06-01 22:06:20.025084 | orchestrator | 22:06:20.024 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=82c7df1e-32ed-4306-ad14-c7acdab76517] 2025-06-01 22:06:20.044693 | orchestrator | 22:06:20.044 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 10s [id=ee682ed7-cf61-4b4b-b7dc-0c09473318ce] 2025-06-01 22:06:20.053365 | orchestrator | 22:06:20.053 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=0e2ac0fd-8533-4907-81a1-045b6df94c33] 2025-06-01 22:06:22.943693 | orchestrator | 22:06:22.943 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 7s [id=13f85c2d-4898-4482-8d8e-3f6b8cc23053] 2025-06-01 22:06:22.949117 | orchestrator | 22:06:22.948 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-06-01 22:06:22.950861 | orchestrator | 22:06:22.949 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 
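The management subnet that just finished creating pairs the /20 CIDR with a deliberately narrow DHCP range; reconstructed here from the plan output earlier in the run, with only the network_id reference assumed:

```hcl
# subnet-testbed-management as shown in the plan: 192.168.16.0/20 spans
# 192.168.16.0-192.168.31.255, but DHCP only hands out .31.200-.31.250, so the
# statically assigned port addresses (e.g. 192.168.16.14/.15) and the
# 192.168.16.254 VIP stay free of lease collisions.
resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```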
2025-06-01 22:06:22.951390 | orchestrator | 22:06:22.951 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-06-01 22:06:23.132224 | orchestrator | 22:06:23.131 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=43345820-88d3-4ad1-8653-c72fdce414c2] 2025-06-01 22:06:23.146555 | orchestrator | 22:06:23.146 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-06-01 22:06:23.147102 | orchestrator | 22:06:23.146 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-06-01 22:06:23.147960 | orchestrator | 22:06:23.147 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=869ce8cc-2804-4b5c-9722-d9a4913c97ac] 2025-06-01 22:06:23.151978 | orchestrator | 22:06:23.151 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-06-01 22:06:23.152569 | orchestrator | 22:06:23.152 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-06-01 22:06:23.155371 | orchestrator | 22:06:23.155 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-06-01 22:06:23.155854 | orchestrator | 22:06:23.155 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-06-01 22:06:23.157587 | orchestrator | 22:06:23.157 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-06-01 22:06:23.158251 | orchestrator | 22:06:23.158 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-06-01 22:06:23.159564 | orchestrator | 22:06:23.159 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-06-01 22:06:23.298747 | orchestrator | 22:06:23.298 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=f870ea26-4acf-4464-bd59-5eca4a7cde88] 2025-06-01 22:06:23.304934 | orchestrator | 22:06:23.304 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-06-01 22:06:23.358733 | orchestrator | 22:06:23.358 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=17d26f9e-63aa-42b5-b293-6f4545e3eac7] 2025-06-01 22:06:23.373680 | orchestrator | 22:06:23.373 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-06-01 22:06:23.426430 | orchestrator | 22:06:23.426 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=61b436b8-3a8d-42c3-96ba-10a40de08330] 2025-06-01 22:06:23.442897 | orchestrator | 22:06:23.442 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-06-01 22:06:23.510267 | orchestrator | 22:06:23.509 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=65139c32-96a9-4cc6-a284-fe7482985332] 2025-06-01 22:06:23.525623 | orchestrator | 22:06:23.525 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 
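The node management ports now being created were planned earlier with one fixed IP each plus four allowed_address_pairs entries. A sketch of that resource; the network/subnet references and the fixed-IP arithmetic are assumptions inferred from the logged addresses (index 4 → 192.168.16.14, index 5 → .15), and only two of the four pairs are shown:

```hcl
# Neutron port security drops traffic for addresses a port does not own;
# allowed_address_pairs whitelists the 192.168.16.254 VIP and the other
# prefixes the nodes answer for (see the plan output above).
resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
    ip_address = "192.168.16.${10 + count.index}" # inferred: .10 through .15
  }

  allowed_address_pairs {
    ip_address = "192.168.16.254/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.112.0/20"
  }
}
```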
2025-06-01 22:06:23.608467 | orchestrator | 22:06:23.607 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=bff3dcdb-f556-45cf-83e9-a671037a0118] 2025-06-01 22:06:23.623173 | orchestrator | 22:06:23.622 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-06-01 22:06:23.661918 | orchestrator | 22:06:23.661 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=128812f2-6bb7-42e7-9147-2faf9813aa16] 2025-06-01 22:06:23.679334 | orchestrator | 22:06:23.679 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-06-01 22:06:23.802484 | orchestrator | 22:06:23.801 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=e1fc1eb3-73d1-432f-8471-d12cc57c89d2] 2025-06-01 22:06:23.808593 | orchestrator | 22:06:23.808 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=3c9d29b4-8bb0-4017-8688-487075ec05ff] 2025-06-01 22:06:23.819531 | orchestrator | 22:06:23.819 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-06-01 22:06:24.140825 | orchestrator | 22:06:24.140 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=4a9e66cc-c326-42b5-b979-210cd412bc2e] 2025-06-01 22:06:28.868292 | orchestrator | 22:06:28.867 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=ab667968-e5f0-4289-9eb6-d18c95afb0a1] 2025-06-01 22:06:29.277291 | orchestrator | 22:06:29.276 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 5s [id=4b170823-6d6f-4fd3-a8fd-6d2f8d9003dd] 2025-06-01 22:06:29.335860 | orchestrator | 22:06:29.335 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=0eb37921-f7ea-44cd-8639-c1e87823bc32] 2025-06-01 22:06:29.390794 | orchestrator | 22:06:29.390 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=62b67b34-c2c4-4ae8-9c52-cdd0219fd9bc] 2025-06-01 22:06:29.423898 | orchestrator | 22:06:29.423 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 5s [id=b1a38b67-b07f-4b45-9ff6-de2b62bd0d13] 2025-06-01 22:06:29.755944 | orchestrator | 22:06:29.755 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=1a3da06a-9ecb-4999-bf0b-767a5868df84] 2025-06-01 22:06:30.291292 | orchestrator | 22:06:30.290 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=691d71bd-581a-49a4-aad1-d8a3907601bf] 2025-06-01 22:06:30.461364 | orchestrator | 22:06:30.460 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 7s [id=9179a7eb-bf9a-4932-b887-0aabb92961de] 2025-06-01 22:06:30.482346 | orchestrator | 22:06:30.482 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-06-01 22:06:30.495657 | orchestrator | 22:06:30.495 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-06-01 22:06:30.495946 | orchestrator | 22:06:30.495 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 
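Of the rules created above, security_group_rule_vrrp is the only one keyed to a raw IP protocol number: 112 is the IANA protocol number for VRRP, so the rule admits keepalived advertisements between the nodes rather than opening a TCP/UDP port. Reconstructed from the plan, with the security-group reference assumed:

```hcl
# VRRP (IP protocol 112) ingress; security_group_id is assumed to point at
# the node security group, since the nodes hold the 192.168.16.254 VIP.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112"
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}
```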
2025-06-01 22:06:30.498103 | orchestrator | 22:06:30.497 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-06-01 22:06:30.511877 | orchestrator | 22:06:30.511 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-06-01 22:06:30.512896 | orchestrator | 22:06:30.512 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-06-01 22:06:30.516352 | orchestrator | 22:06:30.516 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-06-01 22:06:36.971986 | orchestrator | 22:06:36.971 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=fb22c534-25d7-4f19-9a50-442aa6012a56] 2025-06-01 22:06:36.986798 | orchestrator | 22:06:36.986 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-06-01 22:06:36.989962 | orchestrator | 22:06:36.989 STDOUT terraform: local_file.inventory: Creating... 2025-06-01 22:06:36.990186 | orchestrator | 22:06:36.990 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-06-01 22:06:36.996947 | orchestrator | 22:06:36.996 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=b93f14315cca025c8fd6ae3aaba0c4d0acdc46ff] 2025-06-01 22:06:36.997624 | orchestrator | 22:06:36.997 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=79c62158a7cf0d526877ca719152983ac6ac67ad] 2025-06-01 22:06:38.019875 | orchestrator | 22:06:38.019 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=fb22c534-25d7-4f19-9a50-442aa6012a56] 2025-06-01 22:06:40.499995 | orchestrator | 22:06:40.499 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-06-01 22:06:40.500824 | orchestrator | 22:06:40.500 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [11s elapsed] 2025-06-01 22:06:40.500942 | orchestrator | 22:06:40.500 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [11s elapsed] 2025-06-01 22:06:40.513223 | orchestrator | 22:06:40.512 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-06-01 22:06:40.516441 | orchestrator | 22:06:40.516 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-06-01 22:06:40.517584 | orchestrator | 22:06:40.517 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-06-01 22:06:50.500618 | orchestrator | 22:06:50.500 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [21s elapsed] 2025-06-01 22:06:50.501652 | orchestrator | 22:06:50.501 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [21s elapsed] 2025-06-01 22:06:50.501688 | orchestrator | 22:06:50.501 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [21s elapsed] 2025-06-01 22:06:50.514243 | orchestrator | 22:06:50.513 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-06-01 22:06:50.517373 | orchestrator | 22:06:50.517 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-06-01 22:06:50.518454 | orchestrator | 22:06:50.518 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... 
[20s elapsed] 2025-06-01 22:06:50.859740 | orchestrator | 22:06:50.859 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=9020120f-93ea-4e1c-8c4c-b6056e2d1fee] 2025-06-01 22:06:51.045936 | orchestrator | 22:06:51.045 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=7a58a803-7454-4553-8c57-d643db54302e] 2025-06-01 22:06:51.143153 | orchestrator | 22:06:51.142 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 20s [id=88ca8ee8-3777-4913-8710-179ceb71e278] 2025-06-01 22:07:00.501274 | orchestrator | 22:07:00.500 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [31s elapsed] 2025-06-01 22:07:00.518760 | orchestrator | 22:07:00.518 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2025-06-01 22:07:00.519567 | orchestrator | 22:07:00.519 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2025-06-01 22:07:01.141843 | orchestrator | 22:07:01.141 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=cfff7dbc-5055-418f-8831-205b05d3170b] 2025-06-01 22:07:01.147138 | orchestrator | 22:07:01.146 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 30s [id=afcb2920-9576-4081-8c98-f7c1f81e8d06] 2025-06-01 22:07:01.241972 | orchestrator | 22:07:01.241 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 30s [id=a5fd3d65-8516-4b46-9ec3-03644b593f7a] 2025-06-01 22:07:01.259264 | orchestrator | 22:07:01.258 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-06-01 22:07:01.265866 | orchestrator | 22:07:01.265 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-06-01 22:07:01.268551 | orchestrator | 22:07:01.268 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-06-01 22:07:01.274240 | orchestrator | 22:07:01.273 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-06-01 22:07:01.275636 | orchestrator | 22:07:01.275 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=1718660877140697293] 2025-06-01 22:07:01.278967 | orchestrator | 22:07:01.278 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-06-01 22:07:01.281655 | orchestrator | 22:07:01.281 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-06-01 22:07:01.302354 | orchestrator | 22:07:01.302 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-06-01 22:07:01.303137 | orchestrator | 22:07:01.302 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-06-01 22:07:01.305851 | orchestrator | 22:07:01.305 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-06-01 22:07:01.309899 | orchestrator | 22:07:01.309 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-06-01 22:07:01.313489 | orchestrator | 22:07:01.313 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 
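The nine data-volume attachments starting here pair node_volume[0..8] with only three of the six servers: the attachment ids logged just below (of the form <server id>/<volume id>) show volume i landing on node_server[(i % 3) + 3]. A sketch of the servers and that attachment wiring; the name and flavor are placeholders, the block-device details are assumptions, and the index arithmetic is inferred from those ids:

```hcl
# Sketch: each server boots from its per-node copy of the image volume and
# attaches to its pre-created management port.
resource "openstack_compute_instance_v2" "node_server" {
  count       = 6
  name        = "testbed-node-${count.index}" # assumed naming
  flavor_name = "SCS-4V-16-50"                # placeholder flavor
  key_pair    = openstack_compute_keypair_v2.key.name

  network {
    port = openstack_networking_port_v2.node_port_management[count.index].id
  }

  block_device {
    uuid                  = openstack_blockstorage_volume_v3.node_base_volume[count.index].id
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = true
  }
}

# Inferred from the attachment ids below: volume i goes to
# node_server[(i % 3) + 3], i.e. only the last three nodes carry data volumes.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[(count.index % 3) + 3].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```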
2025-06-01 22:07:07.038172 | orchestrator | 22:07:07.037 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 6s [id=88ca8ee8-3777-4913-8710-179ceb71e278/f2cefa5c-3d1d-4277-b121-6d9adea683a7] 2025-06-01 22:07:07.052654 | orchestrator | 22:07:07.052 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 6s [id=7a58a803-7454-4553-8c57-d643db54302e/389c9d93-9871-4a47-9a60-ac279d750f3d] 2025-06-01 22:07:07.072956 | orchestrator | 22:07:07.072 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 6s [id=88ca8ee8-3777-4913-8710-179ceb71e278/768ce349-132d-4c04-96b3-035bfe10ebf6] 2025-06-01 22:07:07.087208 | orchestrator | 22:07:07.082 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 6s [id=9020120f-93ea-4e1c-8c4c-b6056e2d1fee/a8e8789d-2f8d-4752-a1c5-15f6e96bd27f] 2025-06-01 22:07:07.098617 | orchestrator | 22:07:07.098 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 6s [id=7a58a803-7454-4553-8c57-d643db54302e/39b25e00-2509-407e-b71e-c183a8ac9680] 2025-06-01 22:07:07.128255 | orchestrator | 22:07:07.127 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 6s [id=88ca8ee8-3777-4913-8710-179ceb71e278/e23ad96a-b832-416d-911f-1711f12500c4] 2025-06-01 22:07:07.131338 | orchestrator | 22:07:07.130 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 6s [id=9020120f-93ea-4e1c-8c4c-b6056e2d1fee/9eb75d32-600b-4da1-bdd4-064d087d06d5] 2025-06-01 22:07:07.151598 | orchestrator | 22:07:07.151 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 6s [id=7a58a803-7454-4553-8c57-d643db54302e/9f9b614f-8ac1-443f-a8a9-e3e743fec9fb] 2025-06-01 22:07:07.161440 | orchestrator | 22:07:07.161 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 6s [id=9020120f-93ea-4e1c-8c4c-b6056e2d1fee/b890f567-0ad2-40b6-bedf-e62e59fc0322] 2025-06-01 22:07:11.314882 | orchestrator | 22:07:11.314 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-06-01 22:07:21.315068 | orchestrator | 22:07:21.314 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-06-01 22:07:21.646341 | orchestrator | 22:07:21.645 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=fa0a64e8-dc07-4590-8eff-21279373a1d1] 2025-06-01 22:07:21.719174 | orchestrator | 22:07:21.718 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
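With the apply finished, the only routable entry point is the manager's floating IP, associated a minute earlier and exported, together with the private key, as the two sensitive outputs printed below and consumed by the following "Fetch manager address" and "Get ssh keypair from terraform environment" tasks. A sketch of that tail of the configuration; the pool name and the key reference are assumptions:

```hcl
# Floating IP for the manager, bound to its management port; the external
# pool name is a placeholder.
resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "external"
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}

# The two sensitive outputs; the private-key source is the assumed
# tls_private_key from the keypair sketch above.
output "manager_address" {
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address
  sensitive = true
}

output "private_key" {
  value     = tls_private_key.ssh.private_key_openssh
  sensitive = true
}
```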
2025-06-01 22:07:21.719293 | orchestrator | 22:07:21.719 STDOUT terraform: Outputs: 2025-06-01 22:07:21.719311 | orchestrator | 22:07:21.719 STDOUT terraform: manager_address = <sensitive> 2025-06-01 22:07:21.719324 | orchestrator | 22:07:21.719 STDOUT terraform: private_key = <sensitive> 2025-06-01 22:07:21.889827 | orchestrator | ok: Runtime: 0:01:33.016953 2025-06-01 22:07:21.923817 | 2025-06-01 22:07:21.923947 | TASK [Create infrastructure (stable)] 2025-06-01 22:07:22.457519 | orchestrator | skipping: Conditional result was False 2025-06-01 22:07:22.475801 | 2025-06-01 22:07:22.475984 | TASK [Fetch manager address] 2025-06-01 22:07:23.015411 | orchestrator | ok 2025-06-01 22:07:23.026041 | 2025-06-01 22:07:23.026201 | TASK [Set manager_host address] 2025-06-01 22:07:23.107091 | orchestrator | ok 2025-06-01 22:07:23.118309 | 2025-06-01 22:07:23.118448 | LOOP [Update ansible collections] 2025-06-01 22:07:25.409574 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-06-01 22:07:25.409972 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-01 22:07:25.410035 | orchestrator | Starting galaxy collection install process 2025-06-01 22:07:25.410078 | orchestrator | Process install dependency map 2025-06-01 22:07:25.410117 | orchestrator | Starting collection install process 2025-06-01 22:07:25.410153 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons' 2025-06-01 22:07:25.410197 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons 2025-06-01 22:07:25.410241 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-06-01 22:07:25.410323 | orchestrator | ok: Item: commons Runtime: 0:00:01.971390 2025-06-01 22:07:26.275976 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-01 22:07:26.276196 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-06-01 22:07:26.276276 | orchestrator | Starting galaxy collection install process 2025-06-01 22:07:26.276338 | orchestrator | Process install dependency map 2025-06-01 22:07:26.276396 | orchestrator | Starting collection install process 2025-06-01 22:07:26.276447 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services' 2025-06-01 22:07:26.276497 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services 2025-06-01 22:07:26.276546 | orchestrator | osism.services:999.0.0 was installed successfully 2025-06-01 22:07:26.276689 | orchestrator | ok: Item: services Runtime: 0:00:00.594615 2025-06-01 22:07:26.296129 | 2025-06-01 22:07:26.296292 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-06-01 22:07:36.875050 | orchestrator | ok 2025-06-01 22:07:36.883883 | 2025-06-01 22:07:36.884016 | TASK [Wait a little longer for the manager so that everything is ready] 2025-06-01 22:08:36.923056 | orchestrator | ok 2025-06-01 22:08:36.932665 | 2025-06-01 22:08:36.932808 | TASK [Fetch manager ssh hostkey] 2025-06-01 22:08:38.511983 | orchestrator | Output suppressed because no_log was given 2025-06-01 22:08:38.530304 | 2025-06-01 22:08:38.530482 | TASK [Get ssh keypair from terraform environment] 2025-06-01 22:08:39.078185 | orchestrator 
| ok: Runtime: 0:00:00.009767 2025-06-01 22:08:39.093090 | 2025-06-01 22:08:39.093248 | TASK [Point out that the following task takes some time and does not give any output] 2025-06-01 22:08:39.132416 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-06-01 22:08:39.142309 | 2025-06-01 22:08:39.142487 | TASK [Run manager part 0] 2025-06-01 22:08:39.996414 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-01 22:08:40.042060 | orchestrator | 2025-06-01 22:08:40.042109 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-06-01 22:08:40.042117 | orchestrator | 2025-06-01 22:08:40.042131 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-06-01 22:08:42.030409 | orchestrator | ok: [testbed-manager] 2025-06-01 22:08:42.030462 | orchestrator | 2025-06-01 22:08:42.030485 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-06-01 22:08:42.030497 | orchestrator | 2025-06-01 22:08:42.030507 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-01 22:08:44.099442 | orchestrator | ok: [testbed-manager] 2025-06-01 22:08:44.099624 | orchestrator | 2025-06-01 22:08:44.099668 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-06-01 22:08:44.869631 | orchestrator | ok: [testbed-manager] 2025-06-01 22:08:44.869739 | orchestrator | 2025-06-01 22:08:44.869758 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-06-01 22:08:44.930794 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:08:44.930893 | orchestrator | 2025-06-01 22:08:44.930917 | orchestrator | TASK [Update package cache] **************************************************** 2025-06-01 22:08:44.968363 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:08:44.968415 | orchestrator | 2025-06-01 22:08:44.968429 | orchestrator | TASK [Install required packages] *********************************************** 2025-06-01 22:08:45.000866 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:08:45.000924 | orchestrator | 2025-06-01 22:08:45.000932 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-06-01 22:08:45.032669 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:08:45.032710 | orchestrator | 2025-06-01 22:08:45.032715 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-06-01 22:08:45.057715 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:08:45.057799 | orchestrator | 2025-06-01 22:08:45.057818 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-06-01 22:08:45.090637 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:08:45.090672 | orchestrator | 2025-06-01 22:08:45.090679 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-06-01 22:08:45.125819 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:08:45.125863 | orchestrator | 2025-06-01 22:08:45.125872 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-06-01 22:08:45.890453 | orchestrator | changed: 
[testbed-manager] 2025-06-01 22:08:45.890537 | orchestrator | 2025-06-01 22:08:45.890552 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-06-01 22:12:04.812636 | orchestrator | changed: [testbed-manager] 2025-06-01 22:12:04.816373 | orchestrator | 2025-06-01 22:12:04.816399 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-06-01 22:13:21.543687 | orchestrator | changed: [testbed-manager] 2025-06-01 22:13:21.543730 | orchestrator | 2025-06-01 22:13:21.543737 | orchestrator | TASK [Install required packages] *********************************************** 2025-06-01 22:13:44.907813 | orchestrator | changed: [testbed-manager] 2025-06-01 22:13:44.907914 | orchestrator | 2025-06-01 22:13:44.907933 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-06-01 22:13:54.311029 | orchestrator | changed: [testbed-manager] 2025-06-01 22:13:54.311144 | orchestrator | 2025-06-01 22:13:54.311159 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-06-01 22:13:54.360457 | orchestrator | ok: [testbed-manager] 2025-06-01 22:13:54.360533 | orchestrator | 2025-06-01 22:13:54.360547 | orchestrator | TASK [Get current user] ******************************************************** 2025-06-01 22:13:55.188837 | orchestrator | ok: [testbed-manager] 2025-06-01 22:13:55.188923 | orchestrator | 2025-06-01 22:13:55.188940 | orchestrator | TASK [Create venv directory] *************************************************** 2025-06-01 22:13:55.974558 | orchestrator | changed: [testbed-manager] 2025-06-01 22:13:55.974644 | orchestrator | 2025-06-01 22:13:55.974659 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-06-01 22:14:02.622499 | orchestrator | changed: [testbed-manager] 2025-06-01 22:14:02.622592 | orchestrator | 2025-06-01 22:14:02.622638 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-06-01 22:14:09.106100 | orchestrator | changed: [testbed-manager] 2025-06-01 22:14:09.106329 | orchestrator | 2025-06-01 22:14:09.106355 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-06-01 22:14:11.896298 | orchestrator | changed: [testbed-manager] 2025-06-01 22:14:11.896404 | orchestrator | 2025-06-01 22:14:11.896419 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-06-01 22:14:13.750672 | orchestrator | changed: [testbed-manager] 2025-06-01 22:14:13.750756 | orchestrator | 2025-06-01 22:14:13.750771 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-06-01 22:14:14.923560 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-06-01 22:14:14.923653 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-06-01 22:14:14.923668 | orchestrator | 2025-06-01 22:14:14.923681 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-06-01 22:14:14.968097 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-06-01 22:14:14.968162 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-06-01 22:14:14.968173 | orchestrator | 2.19. 
Deprecation warnings can be disabled by setting 2025-06-01 22:14:14.968183 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-06-01 22:14:19.187250 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-06-01 22:14:19.187363 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-06-01 22:14:19.187379 | orchestrator | 2025-06-01 22:14:19.187391 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-06-01 22:14:19.777689 | orchestrator | changed: [testbed-manager] 2025-06-01 22:14:19.777725 | orchestrator | 2025-06-01 22:14:19.777732 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-06-01 22:14:58.319571 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-06-01 22:14:58.319640 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-06-01 22:14:58.319651 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-06-01 22:14:58.319658 | orchestrator | 2025-06-01 22:14:58.319665 | orchestrator | TASK [Install local collections] *********************************************** 2025-06-01 22:15:00.732805 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-06-01 22:15:00.732959 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-06-01 22:15:00.732975 | orchestrator | 2025-06-01 22:15:00.732988 | orchestrator | PLAY [Create operator user] **************************************************** 2025-06-01 22:15:00.733000 | orchestrator | 2025-06-01 22:15:00.733011 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-01 22:15:02.181483 | orchestrator | ok: [testbed-manager] 2025-06-01 22:15:02.181592 | orchestrator | 2025-06-01 22:15:02.181619 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-06-01 22:15:02.231384 | orchestrator | ok: [testbed-manager] 2025-06-01 22:15:02.231472 | orchestrator | 2025-06-01 22:15:02.231489 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-06-01 22:15:02.296085 | orchestrator | ok: [testbed-manager] 2025-06-01 22:15:02.296190 | orchestrator | 2025-06-01 22:15:02.296206 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-06-01 22:15:03.025327 | orchestrator | changed: [testbed-manager] 2025-06-01 22:15:03.025411 | orchestrator | 2025-06-01 22:15:03.025427 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-06-01 22:15:03.769378 | orchestrator | changed: [testbed-manager] 2025-06-01 22:15:03.769522 | orchestrator | 2025-06-01 22:15:03.769546 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-06-01 22:15:05.177974 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-06-01 22:15:05.178051 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-06-01 22:15:05.178091 | orchestrator | 2025-06-01 22:15:05.178109 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-06-01 22:15:06.568812 | orchestrator | changed: [testbed-manager] 2025-06-01 22:15:06.568871 | orchestrator | 2025-06-01 22:15:06.568880 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc 
configuration file] *** 2025-06-01 22:15:08.366293 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-06-01 22:15:08.366387 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-06-01 22:15:08.366402 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-06-01 22:15:08.366413 | orchestrator | 2025-06-01 22:15:08.366425 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-06-01 22:15:08.955519 | orchestrator | changed: [testbed-manager] 2025-06-01 22:15:08.956142 | orchestrator | 2025-06-01 22:15:08.956173 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-06-01 22:15:09.025199 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:15:09.025245 | orchestrator | 2025-06-01 22:15:09.025254 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-06-01 22:15:09.901547 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-01 22:15:09.901617 | orchestrator | changed: [testbed-manager] 2025-06-01 22:15:09.901631 | orchestrator | 2025-06-01 22:15:09.901643 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-06-01 22:15:09.941574 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:15:09.941638 | orchestrator | 2025-06-01 22:15:09.941656 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-06-01 22:15:09.977213 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:15:09.977269 | orchestrator | 2025-06-01 22:15:09.977286 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-06-01 22:15:10.011466 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:15:10.011522 | orchestrator | 2025-06-01 22:15:10.011537 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-06-01 22:15:10.069860 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:15:10.069923 | orchestrator | 2025-06-01 22:15:10.069939 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-06-01 22:15:10.833332 | orchestrator | ok: [testbed-manager] 2025-06-01 22:15:10.833400 | orchestrator | 2025-06-01 22:15:10.833416 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-06-01 22:15:10.833429 | orchestrator | 2025-06-01 22:15:10.833442 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-01 22:15:12.291833 | orchestrator | ok: [testbed-manager] 2025-06-01 22:15:12.291907 | orchestrator | 2025-06-01 22:15:12.291932 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-06-01 22:15:13.270251 | orchestrator | changed: [testbed-manager] 2025-06-01 22:15:13.270333 | orchestrator | 2025-06-01 22:15:13.270348 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 22:15:13.270361 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-06-01 22:15:13.270373 | orchestrator | 2025-06-01 22:15:13.431046 | orchestrator | ok: Runtime: 0:06:33.928492 2025-06-01 22:15:13.443183 | 2025-06-01 22:15:13.443314 | TASK [Point out that the log in on the manager is now possible] 2025-06-01 22:15:13.489669 | 
orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2025-06-01 22:15:13.499292 | 2025-06-01 22:15:13.499467 | TASK [Point out that the following task takes some time and does not give any output] 2025-06-01 22:15:13.529659 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. No further output from it is shown here. It takes a few minutes for this task to complete. 2025-06-01 22:15:13.536561 | 2025-06-01 22:15:13.536674 | TASK [Run manager part 1 + 2] 2025-06-01 22:15:14.458551 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-01 22:15:14.525268 | orchestrator | 2025-06-01 22:15:14.525343 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-06-01 22:15:14.525357 | orchestrator | 2025-06-01 22:15:14.525381 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-01 22:15:17.186981 | orchestrator | ok: [testbed-manager] 2025-06-01 22:15:17.187029 | orchestrator | 2025-06-01 22:15:17.187052 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-06-01 22:15:17.232172 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:15:17.232233 | orchestrator | 2025-06-01 22:15:17.232247 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-06-01 22:15:17.274653 | orchestrator | ok: [testbed-manager] 2025-06-01 22:15:17.274703 | orchestrator | 2025-06-01 22:15:17.274717 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-06-01 22:15:17.313617 | orchestrator | ok: [testbed-manager] 2025-06-01 22:15:17.313667 | orchestrator | 2025-06-01 22:15:17.313676 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-06-01 22:15:17.390843 | orchestrator | ok: [testbed-manager] 2025-06-01 22:15:17.390899 | orchestrator | 2025-06-01 22:15:17.390910 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-06-01 22:15:17.454249 | orchestrator | ok: [testbed-manager] 2025-06-01 22:15:17.454306 | orchestrator | 2025-06-01 22:15:17.454317 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-06-01 22:15:17.509807 | orchestrator | included: /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-06-01 22:15:17.509846 | orchestrator | 2025-06-01 22:15:17.509852 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-06-01 22:15:18.315711 | orchestrator | ok: [testbed-manager] 2025-06-01 22:15:18.315765 | orchestrator | 2025-06-01 22:15:18.315775 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-06-01 22:15:18.364521 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:15:18.364574 | orchestrator | 2025-06-01 22:15:18.364583 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-06-01 22:15:19.775938 | orchestrator | changed: [testbed-manager] 2025-06-01 22:15:19.775996 | orchestrator | 2025-06-01 22:15:19.776007 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-06-01 22:15:20.377271 | orchestrator | ok: [testbed-manager]
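The repository role has just removed the legacy /etc/apt/sources.list; the next task replaces it with a deb822-style ubuntu.sources file, the format Ubuntu uses from 24.04 on (consistent with the skipped 'Include tasks for Ubuntu < 24.04' above). A hand-run sketch of the same migration follows; the mirror URL, suites, and components are assumptions, not the values the role actually templates:

    # Replace the legacy one-line APT source with a deb822 stanza (run as root).
    rm -f /etc/apt/sources.list
    cat > /etc/apt/sources.list.d/ubuntu.sources <<'EOF'
    Types: deb
    URIs: http://archive.ubuntu.com/ubuntu/
    Suites: noble noble-updates noble-backports
    Components: main restricted universe multiverse
    Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
    EOF
    apt-get update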
2025-06-01 22:15:20.377333 | orchestrator | 2025-06-01 22:15:20.377344 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-01 22:15:21.571943 | orchestrator | changed: [testbed-manager] 2025-06-01 22:15:21.571995 | orchestrator | 2025-06-01 22:15:21.572005 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-01 22:15:35.278216 | orchestrator | changed: [testbed-manager] 2025-06-01 22:15:35.278284 | orchestrator | 2025-06-01 22:15:35.278299 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-06-01 22:15:35.964597 | orchestrator | ok: [testbed-manager] 2025-06-01 22:15:35.964677 | orchestrator | 2025-06-01 22:15:35.964693 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-06-01 22:15:36.016192 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:15:36.016269 | orchestrator | 2025-06-01 22:15:36.016285 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-06-01 22:15:37.044338 | orchestrator | changed: [testbed-manager] 2025-06-01 22:15:37.044423 | orchestrator | 2025-06-01 22:15:37.044439 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-06-01 22:15:38.030427 | orchestrator | changed: [testbed-manager] 2025-06-01 22:15:38.030516 | orchestrator | 2025-06-01 22:15:38.030533 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-06-01 22:15:38.603391 | orchestrator | changed: [testbed-manager] 2025-06-01 22:15:38.603475 | orchestrator | 2025-06-01 22:15:38.603490 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-06-01 22:15:38.645903 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-06-01 22:15:38.646001 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-06-01 22:15:38.646049 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-06-01 22:15:38.646064 | orchestrator | deprecation_warnings=False in ansible.cfg. 
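The tasks that follow copy the testbed checkout onto the manager and then install a handful of pinned Python requirements into the /opt/venv virtual environment. The pip step is roughly equivalent to the one-liner below, with the package list taken from the loop items shown in the task output that follows:

    # Install the pinned requirements directly into the /opt/venv virtualenv.
    /opt/venv/bin/pip install 'Jinja2' 'PyYAML' 'packaging' \
        'python-gilt==1.2.3' 'requests>=2.32.2' 'docker>=7.1.0'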
2025-06-01 22:15:41.680638 | orchestrator | changed: [testbed-manager] 2025-06-01 22:15:41.680810 | orchestrator | 2025-06-01 22:15:41.680825 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-06-01 22:15:51.341112 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-06-01 22:15:51.341210 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-06-01 22:15:51.341228 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-06-01 22:15:51.341240 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-06-01 22:15:51.341258 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-06-01 22:15:51.341270 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-06-01 22:15:51.341281 | orchestrator | 2025-06-01 22:15:51.341294 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-06-01 22:15:52.432510 | orchestrator | changed: [testbed-manager] 2025-06-01 22:15:52.432650 | orchestrator | 2025-06-01 22:15:52.432666 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-06-01 22:15:52.476548 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:15:52.476631 | orchestrator | 2025-06-01 22:15:52.476647 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-06-01 22:15:55.507138 | orchestrator | changed: [testbed-manager] 2025-06-01 22:15:55.507312 | orchestrator | 2025-06-01 22:15:55.507329 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-06-01 22:15:55.544797 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:15:55.544877 | orchestrator | 2025-06-01 22:15:55.544894 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-06-01 22:17:35.092104 | orchestrator | changed: [testbed-manager] 2025-06-01 22:17:35.092219 | orchestrator | 2025-06-01 22:17:35.092241 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-06-01 22:17:36.290569 | orchestrator | ok: [testbed-manager] 2025-06-01 22:17:36.290625 | orchestrator | 2025-06-01 22:17:36.290632 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 22:17:36.290640 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-06-01 22:17:36.290645 | orchestrator | 2025-06-01 22:17:36.658571 | orchestrator | ok: Runtime: 0:02:22.507234 2025-06-01 22:17:36.680090 | 2025-06-01 22:17:36.680286 | TASK [Reboot manager] 2025-06-01 22:17:38.223150 | orchestrator | ok: Runtime: 0:00:01.063675 2025-06-01 22:17:38.240264 | 2025-06-01 22:17:38.240468 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-06-01 22:17:54.687694 | orchestrator | ok 2025-06-01 22:17:54.699586 | 2025-06-01 22:17:54.699725 | TASK [Wait a little longer for the manager so that everything is ready] 2025-06-01 22:18:54.755859 | orchestrator | ok 2025-06-01 22:18:54.764849 | 2025-06-01 22:18:54.764965 | TASK [Deploy manager + bootstrap nodes] 2025-06-01 22:18:57.503254 | orchestrator | 2025-06-01 22:18:57.503410 | orchestrator | # DEPLOY MANAGER 2025-06-01 22:18:57.503433 | orchestrator | 2025-06-01 22:18:57.503447 | orchestrator | + set -e 2025-06-01 22:18:57.503460 | orchestrator | + echo 2025-06-01 22:18:57.503475 | orchestrator | + echo '# DEPLOY 
MANAGER' 2025-06-01 22:18:57.503491 | orchestrator | + echo 2025-06-01 22:18:57.503540 | orchestrator | + cat /opt/manager-vars.sh 2025-06-01 22:18:57.506800 | orchestrator | export NUMBER_OF_NODES=6 2025-06-01 22:18:57.506830 | orchestrator | 2025-06-01 22:18:57.506843 | orchestrator | export CEPH_VERSION=reef 2025-06-01 22:18:57.506856 | orchestrator | export CONFIGURATION_VERSION=main 2025-06-01 22:18:57.506868 | orchestrator | export MANAGER_VERSION=latest 2025-06-01 22:18:57.506890 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-06-01 22:18:57.506900 | orchestrator | 2025-06-01 22:18:57.506918 | orchestrator | export ARA=false 2025-06-01 22:18:57.506930 | orchestrator | export DEPLOY_MODE=manager 2025-06-01 22:18:57.506948 | orchestrator | export TEMPEST=false 2025-06-01 22:18:57.506959 | orchestrator | export IS_ZUUL=true 2025-06-01 22:18:57.506970 | orchestrator | 2025-06-01 22:18:57.506988 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.143 2025-06-01 22:18:57.506999 | orchestrator | export EXTERNAL_API=false 2025-06-01 22:18:57.507010 | orchestrator | 2025-06-01 22:18:57.507020 | orchestrator | export IMAGE_USER=ubuntu 2025-06-01 22:18:57.507035 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-06-01 22:18:57.507045 | orchestrator | 2025-06-01 22:18:57.507056 | orchestrator | export CEPH_STACK=ceph-ansible 2025-06-01 22:18:57.507198 | orchestrator | 2025-06-01 22:18:57.507215 | orchestrator | + echo 2025-06-01 22:18:57.507227 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-01 22:18:57.508372 | orchestrator | ++ export INTERACTIVE=false 2025-06-01 22:18:57.508409 | orchestrator | ++ INTERACTIVE=false 2025-06-01 22:18:57.508420 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-01 22:18:57.508432 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-01 22:18:57.508442 | orchestrator | + source /opt/manager-vars.sh 2025-06-01 22:18:57.508453 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-01 22:18:57.508463 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-01 22:18:57.508474 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-01 22:18:57.508484 | orchestrator | ++ CEPH_VERSION=reef 2025-06-01 22:18:57.508495 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-01 22:18:57.508506 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-01 22:18:57.508516 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-01 22:18:57.508527 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-01 22:18:57.508537 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-01 22:18:57.508555 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-01 22:18:57.508566 | orchestrator | ++ export ARA=false 2025-06-01 22:18:57.508577 | orchestrator | ++ ARA=false 2025-06-01 22:18:57.508588 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-01 22:18:57.508598 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-01 22:18:57.508608 | orchestrator | ++ export TEMPEST=false 2025-06-01 22:18:57.508619 | orchestrator | ++ TEMPEST=false 2025-06-01 22:18:57.508629 | orchestrator | ++ export IS_ZUUL=true 2025-06-01 22:18:57.508640 | orchestrator | ++ IS_ZUUL=true 2025-06-01 22:18:57.508650 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.143 2025-06-01 22:18:57.508661 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.143 2025-06-01 22:18:57.508671 | orchestrator | ++ export EXTERNAL_API=false 2025-06-01 22:18:57.508682 | orchestrator | ++ EXTERNAL_API=false 2025-06-01 22:18:57.508692 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-01 
22:18:57.508703 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-01 22:18:57.508713 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-01 22:18:57.508724 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-01 22:18:57.508735 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-01 22:18:57.508745 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-01 22:18:57.508756 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-06-01 22:18:57.570271 | orchestrator | + docker version 2025-06-01 22:18:57.850215 | orchestrator | Client: Docker Engine - Community 2025-06-01 22:18:57.850308 | orchestrator | Version: 27.5.1 2025-06-01 22:18:57.850324 | orchestrator | API version: 1.47 2025-06-01 22:18:57.850335 | orchestrator | Go version: go1.22.11 2025-06-01 22:18:57.850346 | orchestrator | Git commit: 9f9e405 2025-06-01 22:18:57.850357 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-06-01 22:18:57.850368 | orchestrator | OS/Arch: linux/amd64 2025-06-01 22:18:57.850379 | orchestrator | Context: default 2025-06-01 22:18:57.850390 | orchestrator | 2025-06-01 22:18:57.850401 | orchestrator | Server: Docker Engine - Community 2025-06-01 22:18:57.850412 | orchestrator | Engine: 2025-06-01 22:18:57.850423 | orchestrator | Version: 27.5.1 2025-06-01 22:18:57.850434 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-06-01 22:18:57.850546 | orchestrator | Go version: go1.22.11 2025-06-01 22:18:57.850561 | orchestrator | Git commit: 4c9b3b0 2025-06-01 22:18:57.850571 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-06-01 22:18:57.850582 | orchestrator | OS/Arch: linux/amd64 2025-06-01 22:18:57.850592 | orchestrator | Experimental: false 2025-06-01 22:18:57.850603 | orchestrator | containerd: 2025-06-01 22:18:57.850614 | orchestrator | Version: 1.7.27 2025-06-01 22:18:57.850625 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-06-01 22:18:57.850636 | orchestrator | runc: 2025-06-01 22:18:57.850646 | orchestrator | Version: 1.2.5 2025-06-01 22:18:57.850657 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-06-01 22:18:57.850668 | orchestrator | docker-init: 2025-06-01 22:18:57.850678 | orchestrator | Version: 0.19.0 2025-06-01 22:18:57.850689 | orchestrator | GitCommit: de40ad0 2025-06-01 22:18:57.854998 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-06-01 22:18:57.864792 | orchestrator | + set -e 2025-06-01 22:18:57.864847 | orchestrator | + source /opt/manager-vars.sh 2025-06-01 22:18:57.864859 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-01 22:18:57.864870 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-01 22:18:57.864880 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-01 22:18:57.864891 | orchestrator | ++ CEPH_VERSION=reef 2025-06-01 22:18:57.864902 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-01 22:18:57.864913 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-01 22:18:57.864924 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-01 22:18:57.864934 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-01 22:18:57.864945 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-01 22:18:57.864955 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-01 22:18:57.864966 | orchestrator | ++ export ARA=false 2025-06-01 22:18:57.864977 | orchestrator | ++ ARA=false 2025-06-01 22:18:57.864988 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-01 22:18:57.864998 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-01 22:18:57.865008 | orchestrator | ++ 
export TEMPEST=false 2025-06-01 22:18:57.865019 | orchestrator | ++ TEMPEST=false 2025-06-01 22:18:57.865029 | orchestrator | ++ export IS_ZUUL=true 2025-06-01 22:18:57.865040 | orchestrator | ++ IS_ZUUL=true 2025-06-01 22:18:57.865050 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.143 2025-06-01 22:18:57.865061 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.143 2025-06-01 22:18:57.865078 | orchestrator | ++ export EXTERNAL_API=false 2025-06-01 22:18:57.865089 | orchestrator | ++ EXTERNAL_API=false 2025-06-01 22:18:57.865099 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-01 22:18:57.865110 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-01 22:18:57.865121 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-01 22:18:57.865131 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-01 22:18:57.865142 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-01 22:18:57.865152 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-01 22:18:57.865163 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-01 22:18:57.865212 | orchestrator | ++ export INTERACTIVE=false 2025-06-01 22:18:57.865223 | orchestrator | ++ INTERACTIVE=false 2025-06-01 22:18:57.865233 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-01 22:18:57.865248 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-01 22:18:57.865271 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-06-01 22:18:57.865282 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-06-01 22:18:57.865425 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2025-06-01 22:18:57.872393 | orchestrator | + set -e 2025-06-01 22:18:57.872878 | orchestrator | + VERSION=reef 2025-06-01 22:18:57.873550 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-06-01 22:18:57.880682 | orchestrator | + [[ -n ceph_version: reef ]] 2025-06-01 22:18:57.880704 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2025-06-01 22:18:57.886407 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2025-06-01 22:18:57.893223 | orchestrator | + set -e 2025-06-01 22:18:57.893246 | orchestrator | + VERSION=2024.2 2025-06-01 22:18:57.894283 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-06-01 22:18:57.896889 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2025-06-01 22:18:57.896910 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2025-06-01 22:18:57.900922 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-06-01 22:18:57.901636 | orchestrator | ++ semver latest 7.0.0 2025-06-01 22:18:57.955703 | orchestrator | + [[ -1 -ge 0 ]] 2025-06-01 22:18:57.955773 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-06-01 22:18:57.955787 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-06-01 22:18:57.955798 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-06-01 22:18:57.992494 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-06-01 22:18:57.994896 | orchestrator | + source /opt/venv/bin/activate 2025-06-01 22:18:57.995965 | orchestrator | ++ deactivate nondestructive 2025-06-01 22:18:57.995986 | orchestrator | ++ '[' -n '' ']' 2025-06-01 22:18:57.995998 | orchestrator | ++ '[' -n '' ']' 2025-06-01 22:18:57.996009 | orchestrator | ++ hash -r 2025-06-01 22:18:57.996020 | orchestrator | 
++ '[' -n '' ']' 2025-06-01 22:18:57.996030 | orchestrator | ++ unset VIRTUAL_ENV 2025-06-01 22:18:57.996041 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-06-01 22:18:57.996052 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-06-01 22:18:57.996070 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-06-01 22:18:57.996083 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-06-01 22:18:57.996093 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-06-01 22:18:57.996104 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-06-01 22:18:57.996115 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-01 22:18:57.996126 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-01 22:18:57.996137 | orchestrator | ++ export PATH 2025-06-01 22:18:57.996147 | orchestrator | ++ '[' -n '' ']' 2025-06-01 22:18:57.996158 | orchestrator | ++ '[' -z '' ']' 2025-06-01 22:18:57.996198 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-06-01 22:18:57.996209 | orchestrator | ++ PS1='(venv) ' 2025-06-01 22:18:57.996220 | orchestrator | ++ export PS1 2025-06-01 22:18:57.996231 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-06-01 22:18:57.996242 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-06-01 22:18:57.996257 | orchestrator | ++ hash -r 2025-06-01 22:18:57.996284 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-06-01 22:18:59.351548 | orchestrator | 2025-06-01 22:18:59.351658 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-06-01 22:18:59.351675 | orchestrator | 2025-06-01 22:18:59.351687 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-06-01 22:18:59.935193 | orchestrator | ok: [testbed-manager] 2025-06-01 22:18:59.935302 | orchestrator | 2025-06-01 22:18:59.935319 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-06-01 22:19:00.972279 | orchestrator | changed: [testbed-manager] 2025-06-01 22:19:00.972382 | orchestrator | 2025-06-01 22:19:00.972399 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-06-01 22:19:00.972412 | orchestrator | 2025-06-01 22:19:00.972424 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-01 22:19:03.544358 | orchestrator | ok: [testbed-manager] 2025-06-01 22:19:03.544462 | orchestrator | 2025-06-01 22:19:03.544478 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-06-01 22:19:03.592673 | orchestrator | ok: [testbed-manager] 2025-06-01 22:19:03.592765 | orchestrator | 2025-06-01 22:19:03.592784 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-06-01 22:19:04.072312 | orchestrator | changed: [testbed-manager] 2025-06-01 22:19:04.072407 | orchestrator | 2025-06-01 22:19:04.072421 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-06-01 22:19:04.122432 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:19:04.122530 | orchestrator | 2025-06-01 22:19:04.122551 | orchestrator | TASK [Install HWE kernel package on Ubuntu] 
************************************ 2025-06-01 22:19:04.466862 | orchestrator | changed: [testbed-manager] 2025-06-01 22:19:04.466961 | orchestrator | 2025-06-01 22:19:04.466978 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-06-01 22:19:04.529705 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:19:04.529773 | orchestrator | 2025-06-01 22:19:04.529787 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-06-01 22:19:04.871268 | orchestrator | ok: [testbed-manager] 2025-06-01 22:19:04.871369 | orchestrator | 2025-06-01 22:19:04.871386 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-06-01 22:19:04.993387 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:19:04.993495 | orchestrator | 2025-06-01 22:19:04.993510 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-06-01 22:19:04.993522 | orchestrator | 2025-06-01 22:19:04.993566 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-01 22:19:06.978547 | orchestrator | ok: [testbed-manager] 2025-06-01 22:19:06.978646 | orchestrator | 2025-06-01 22:19:06.978671 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-06-01 22:19:07.090627 | orchestrator | included: osism.services.traefik for testbed-manager 2025-06-01 22:19:07.090730 | orchestrator | 2025-06-01 22:19:07.090747 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-06-01 22:19:07.149207 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-06-01 22:19:07.149272 | orchestrator | 2025-06-01 22:19:07.149287 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-06-01 22:19:08.329508 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-06-01 22:19:08.329607 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-06-01 22:19:08.329623 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-06-01 22:19:08.329635 | orchestrator | 2025-06-01 22:19:08.329648 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-06-01 22:19:10.204473 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-06-01 22:19:10.204596 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-06-01 22:19:10.204630 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-06-01 22:19:10.204655 | orchestrator | 2025-06-01 22:19:10.204669 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-06-01 22:19:10.884346 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-01 22:19:10.884439 | orchestrator | changed: [testbed-manager] 2025-06-01 22:19:10.884455 | orchestrator | 2025-06-01 22:19:10.884468 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-06-01 22:19:11.547709 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-01 22:19:11.547795 | orchestrator | changed: [testbed-manager] 2025-06-01 22:19:11.547809 | orchestrator | 2025-06-01 22:19:11.547821 | orchestrator | TASK [osism.services.traefik : Copy dynamic 
configuration] ********************* 2025-06-01 22:19:11.610773 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:19:11.610827 | orchestrator | 2025-06-01 22:19:11.610840 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-06-01 22:19:11.988390 | orchestrator | ok: [testbed-manager] 2025-06-01 22:19:11.988525 | orchestrator | 2025-06-01 22:19:11.988541 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-06-01 22:19:12.082285 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-06-01 22:19:12.082403 | orchestrator | 2025-06-01 22:19:12.082418 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-06-01 22:19:13.201504 | orchestrator | changed: [testbed-manager] 2025-06-01 22:19:13.201632 | orchestrator | 2025-06-01 22:19:13.201648 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-06-01 22:19:14.096831 | orchestrator | changed: [testbed-manager] 2025-06-01 22:19:14.096981 | orchestrator | 2025-06-01 22:19:14.097008 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-06-01 22:19:25.860063 | orchestrator | changed: [testbed-manager] 2025-06-01 22:19:25.860175 | orchestrator | 2025-06-01 22:19:25.860215 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-06-01 22:19:25.920354 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:19:25.920483 | orchestrator | 2025-06-01 22:19:25.920500 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-06-01 22:19:25.920513 | orchestrator | 2025-06-01 22:19:25.920525 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-01 22:19:27.842403 | orchestrator | ok: [testbed-manager] 2025-06-01 22:19:27.842536 | orchestrator | 2025-06-01 22:19:27.842588 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-06-01 22:19:27.975301 | orchestrator | included: osism.services.manager for testbed-manager 2025-06-01 22:19:27.975437 | orchestrator | 2025-06-01 22:19:27.975452 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-06-01 22:19:28.048375 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-06-01 22:19:28.048498 | orchestrator | 2025-06-01 22:19:28.048515 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-06-01 22:19:30.770871 | orchestrator | ok: [testbed-manager] 2025-06-01 22:19:30.771006 | orchestrator | 2025-06-01 22:19:30.771021 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-06-01 22:19:30.827167 | orchestrator | ok: [testbed-manager] 2025-06-01 22:19:30.827249 | orchestrator | 2025-06-01 22:19:30.827265 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-06-01 22:19:30.968643 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-06-01 22:19:30.968763 | orchestrator | 2025-06-01 22:19:30.968780 | 
orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-06-01 22:19:33.881387 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-06-01 22:19:33.881510 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-06-01 22:19:33.881525 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-06-01 22:19:33.881537 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-06-01 22:19:33.881548 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-06-01 22:19:33.881559 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-06-01 22:19:33.881570 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-06-01 22:19:33.881581 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-06-01 22:19:33.881592 | orchestrator | 2025-06-01 22:19:33.881604 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2025-06-01 22:19:34.522275 | orchestrator | changed: [testbed-manager] 2025-06-01 22:19:34.522369 | orchestrator | 2025-06-01 22:19:34.522413 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-06-01 22:19:35.186077 | orchestrator | changed: [testbed-manager] 2025-06-01 22:19:35.186175 | orchestrator | 2025-06-01 22:19:35.186243 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-06-01 22:19:35.268753 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-06-01 22:19:35.268831 | orchestrator | 2025-06-01 22:19:35.268844 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-06-01 22:19:36.535302 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-06-01 22:19:36.535417 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-06-01 22:19:36.535433 | orchestrator | 2025-06-01 22:19:36.535446 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-06-01 22:19:37.199896 | orchestrator | changed: [testbed-manager] 2025-06-01 22:19:37.199987 | orchestrator | 2025-06-01 22:19:37.200003 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-06-01 22:19:37.260720 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:19:37.260790 | orchestrator | 2025-06-01 22:19:37.260804 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-06-01 22:19:37.332998 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-06-01 22:19:37.333080 | orchestrator | 2025-06-01 22:19:37.333096 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-06-01 22:19:38.774511 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-01 22:19:38.774602 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-01 22:19:38.774616 | orchestrator | changed: [testbed-manager] 2025-06-01 22:19:38.774629 | orchestrator | 2025-06-01 22:19:38.774641 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-06-01 22:19:39.451074 | orchestrator | changed: [testbed-manager] 2025-06-01 22:19:39.451167 
| orchestrator | 2025-06-01 22:19:39.451183 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-06-01 22:19:39.502099 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:19:39.502185 | orchestrator | 2025-06-01 22:19:39.502244 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-06-01 22:19:39.608847 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-06-01 22:19:39.608932 | orchestrator | 2025-06-01 22:19:39.608946 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-06-01 22:19:40.166126 | orchestrator | changed: [testbed-manager] 2025-06-01 22:19:40.166261 | orchestrator | 2025-06-01 22:19:40.166278 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-06-01 22:19:40.598282 | orchestrator | changed: [testbed-manager] 2025-06-01 22:19:40.598393 | orchestrator | 2025-06-01 22:19:40.598409 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-06-01 22:19:41.867423 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-06-01 22:19:41.867547 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-06-01 22:19:41.867563 | orchestrator | 2025-06-01 22:19:41.867577 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-06-01 22:19:42.542242 | orchestrator | changed: [testbed-manager] 2025-06-01 22:19:42.542338 | orchestrator | 2025-06-01 22:19:42.542353 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-06-01 22:19:42.971293 | orchestrator | ok: [testbed-manager] 2025-06-01 22:19:42.971395 | orchestrator | 2025-06-01 22:19:42.971412 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-06-01 22:19:43.361532 | orchestrator | changed: [testbed-manager] 2025-06-01 22:19:43.361629 | orchestrator | 2025-06-01 22:19:43.361645 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-06-01 22:19:43.412181 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:19:43.412299 | orchestrator | 2025-06-01 22:19:43.412316 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-06-01 22:19:43.479606 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-06-01 22:19:43.479683 | orchestrator | 2025-06-01 22:19:43.479696 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-06-01 22:19:43.523262 | orchestrator | ok: [testbed-manager] 2025-06-01 22:19:43.523343 | orchestrator | 2025-06-01 22:19:43.523358 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-06-01 22:19:45.596748 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-06-01 22:19:45.596853 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-06-01 22:19:45.596869 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-06-01 22:19:45.596880 | orchestrator | 2025-06-01 22:19:45.596893 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] 
********************* 2025-06-01 22:19:46.345084 | orchestrator | changed: [testbed-manager] 2025-06-01 22:19:46.345156 | orchestrator | 2025-06-01 22:19:46.345163 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-06-01 22:19:47.113393 | orchestrator | changed: [testbed-manager] 2025-06-01 22:19:47.113504 | orchestrator | 2025-06-01 22:19:47.113530 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-06-01 22:19:47.864124 | orchestrator | changed: [testbed-manager] 2025-06-01 22:19:47.864231 | orchestrator | 2025-06-01 22:19:47.864249 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-06-01 22:19:47.956096 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-06-01 22:19:47.956177 | orchestrator | 2025-06-01 22:19:47.956192 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-06-01 22:19:48.004679 | orchestrator | ok: [testbed-manager] 2025-06-01 22:19:48.004752 | orchestrator | 2025-06-01 22:19:48.004767 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-06-01 22:19:48.754604 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-06-01 22:19:48.754696 | orchestrator | 2025-06-01 22:19:48.754712 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-06-01 22:19:48.844327 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-06-01 22:19:48.844412 | orchestrator | 2025-06-01 22:19:48.844426 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-06-01 22:19:49.600020 | orchestrator | changed: [testbed-manager] 2025-06-01 22:19:49.600128 | orchestrator | 2025-06-01 22:19:49.600136 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-06-01 22:19:50.223713 | orchestrator | ok: [testbed-manager] 2025-06-01 22:19:50.223782 | orchestrator | 2025-06-01 22:19:50.223788 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-06-01 22:19:50.282721 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:19:50.282750 | orchestrator | 2025-06-01 22:19:50.282755 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-06-01 22:19:50.338255 | orchestrator | ok: [testbed-manager] 2025-06-01 22:19:50.338316 | orchestrator | 2025-06-01 22:19:50.338321 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-06-01 22:19:51.198914 | orchestrator | changed: [testbed-manager] 2025-06-01 22:19:51.198987 | orchestrator | 2025-06-01 22:19:51.198993 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-06-01 22:20:59.107352 | orchestrator | changed: [testbed-manager] 2025-06-01 22:20:59.107477 | orchestrator | 2025-06-01 22:20:59.107494 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-06-01 22:21:00.111049 | orchestrator | ok: [testbed-manager] 2025-06-01 22:21:00.111154 | orchestrator | 2025-06-01 22:21:00.111171 | orchestrator | TASK [osism.services.manager : 
Do a manual start of the manager service] ******* 2025-06-01 22:21:00.168663 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:21:00.168754 | orchestrator | 2025-06-01 22:21:00.168770 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-06-01 22:21:02.978690 | orchestrator | changed: [testbed-manager] 2025-06-01 22:21:02.978799 | orchestrator | 2025-06-01 22:21:02.978816 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-06-01 22:21:03.036952 | orchestrator | ok: [testbed-manager] 2025-06-01 22:21:03.037040 | orchestrator | 2025-06-01 22:21:03.037056 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-06-01 22:21:03.037068 | orchestrator | 2025-06-01 22:21:03.037078 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-06-01 22:21:03.089542 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:21:03.089618 | orchestrator | 2025-06-01 22:21:03.089631 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-06-01 22:22:03.139041 | orchestrator | Pausing for 60 seconds 2025-06-01 22:22:03.139098 | orchestrator | changed: [testbed-manager] 2025-06-01 22:22:03.139111 | orchestrator | 2025-06-01 22:22:03.139124 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-06-01 22:22:06.772106 | orchestrator | changed: [testbed-manager] 2025-06-01 22:22:06.772234 | orchestrator | 2025-06-01 22:22:06.772252 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for a healthy manager service] *** 2025-06-01 22:22:48.578250 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for a healthy manager service (50 retries left). 2025-06-01 22:22:48.578441 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for a healthy manager service (49 retries left).
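The 'Manage manager service' task above brings the composed manager stack up through the systemd unit installed earlier, after the container images have been pulled. Done by hand, the sequence would look roughly like this; the unit name manager.service is an assumption inferred from the task names ('Copy manager systemd unit file', 'Stop and disable old service docker-compose@manager'):

    systemctl daemon-reload                               # pick up the newly copied unit file
    docker compose --project-directory /opt/manager pull  # mirrors 'Pull container images'
    systemctl enable --now manager.service                # unit name assumed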
2025-06-01 22:22:48.578461 | orchestrator | changed: [testbed-manager] 2025-06-01 22:22:48.578474 | orchestrator | 2025-06-01 22:22:48.578486 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-06-01 22:22:58.332677 | orchestrator | changed: [testbed-manager] 2025-06-01 22:22:58.332792 | orchestrator | 2025-06-01 22:22:58.332808 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-06-01 22:22:58.420288 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-06-01 22:22:58.420498 | orchestrator | 2025-06-01 22:22:58.420513 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-06-01 22:22:58.420525 | orchestrator | 2025-06-01 22:22:58.420536 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-06-01 22:22:58.478942 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:22:58.479020 | orchestrator | 2025-06-01 22:22:58.479033 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 22:22:58.479045 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-06-01 22:22:58.479056 | orchestrator | 2025-06-01 22:22:58.572247 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-06-01 22:22:58.572380 | orchestrator | + deactivate 2025-06-01 22:22:58.572396 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-06-01 22:22:58.572410 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-01 22:22:58.572420 | orchestrator | + export PATH 2025-06-01 22:22:58.572431 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-06-01 22:22:58.572442 | orchestrator | + '[' -n '' ']' 2025-06-01 22:22:58.572453 | orchestrator | + hash -r 2025-06-01 22:22:58.572463 | orchestrator | + '[' -n '' ']' 2025-06-01 22:22:58.572474 | orchestrator | + unset VIRTUAL_ENV 2025-06-01 22:22:58.572484 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-06-01 22:22:58.572495 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-06-01 22:22:58.572506 | orchestrator | + unset -f deactivate 2025-06-01 22:22:58.572517 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-06-01 22:22:58.579923 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-01 22:22:58.579975 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-06-01 22:22:58.579987 | orchestrator | + local max_attempts=60 2025-06-01 22:22:58.579999 | orchestrator | + local name=ceph-ansible 2025-06-01 22:22:58.580009 | orchestrator | + local attempt_num=1 2025-06-01 22:22:58.580781 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-01 22:22:58.623510 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-01 22:22:58.623573 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-06-01 22:22:58.623586 | orchestrator | + local max_attempts=60 2025-06-01 22:22:58.623597 | orchestrator | + local name=kolla-ansible 2025-06-01 22:22:58.623608 | orchestrator | + local attempt_num=1 2025-06-01 22:22:58.624908 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-06-01 22:22:58.654806 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-01 22:22:58.654867 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-06-01 22:22:58.654880 | orchestrator | + local max_attempts=60 2025-06-01 22:22:58.654891 | orchestrator | + local name=osism-ansible 2025-06-01 22:22:58.654902 | orchestrator | + local attempt_num=1 2025-06-01 22:22:58.655598 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-06-01 22:22:58.687809 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-01 22:22:58.687855 | orchestrator | + [[ true == \t\r\u\e ]] 2025-06-01 22:22:58.688085 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-06-01 22:22:59.422411 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-06-01 22:22:59.641058 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-06-01 22:22:59.641153 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-06-01 22:22:59.641166 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-06-01 22:22:59.641177 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-06-01 22:22:59.641189 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-06-01 22:22:59.641220 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2025-06-01 22:22:59.641246 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2025-06-01 22:22:59.641257 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 53 seconds (healthy) 2025-06-01 22:22:59.641266 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest 
"/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2025-06-01 22:22:59.641276 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-06-01 22:22:59.641285 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2025-06-01 22:22:59.641295 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-06-01 22:22:59.641344 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" watchdog About a minute ago Up About a minute (healthy) 2025-06-01 22:22:59.641355 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-06-01 22:22:59.641364 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-06-01 22:22:59.641374 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-06-01 22:22:59.649728 | orchestrator | ++ semver latest 7.0.0 2025-06-01 22:22:59.711365 | orchestrator | + [[ -1 -ge 0 ]] 2025-06-01 22:22:59.711445 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-06-01 22:22:59.711458 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-06-01 22:22:59.714855 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-06-01 22:23:01.591688 | orchestrator | Registering Redlock._acquired_script 2025-06-01 22:23:01.591792 | orchestrator | Registering Redlock._extend_script 2025-06-01 22:23:01.591809 | orchestrator | Registering Redlock._release_script 2025-06-01 22:23:01.788273 | orchestrator | 2025-06-01 22:23:01 | INFO  | Task 3dbb5886-b51b-45bc-b14a-80105a8fc7e6 (resolvconf) was prepared for execution. 2025-06-01 22:23:01.788378 | orchestrator | 2025-06-01 22:23:01 | INFO  | It takes a moment until task 3dbb5886-b51b-45bc-b14a-80105a8fc7e6 (resolvconf) has been started and output is visible here. 
2025-06-01 22:23:05.864016 | orchestrator | 2025-06-01 22:23:05.864138 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-06-01 22:23:05.865283 | orchestrator | 2025-06-01 22:23:05.866916 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-01 22:23:05.867189 | orchestrator | Sunday 01 June 2025 22:23:05 +0000 (0:00:00.157) 0:00:00.157 *********** 2025-06-01 22:23:09.673861 | orchestrator | ok: [testbed-manager] 2025-06-01 22:23:09.675112 | orchestrator | 2025-06-01 22:23:09.676529 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-06-01 22:23:09.677343 | orchestrator | Sunday 01 June 2025 22:23:09 +0000 (0:00:03.812) 0:00:03.969 *********** 2025-06-01 22:23:09.747209 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:23:09.747842 | orchestrator | 2025-06-01 22:23:09.749700 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-06-01 22:23:09.750127 | orchestrator | Sunday 01 June 2025 22:23:09 +0000 (0:00:00.072) 0:00:04.041 *********** 2025-06-01 22:23:09.853914 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-06-01 22:23:09.854151 | orchestrator | 2025-06-01 22:23:09.855852 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-06-01 22:23:09.857033 | orchestrator | Sunday 01 June 2025 22:23:09 +0000 (0:00:00.108) 0:00:04.150 *********** 2025-06-01 22:23:09.933220 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-06-01 22:23:09.933298 | orchestrator | 2025-06-01 22:23:09.934147 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-06-01 22:23:09.934220 | orchestrator | Sunday 01 June 2025 22:23:09 +0000 (0:00:00.079) 0:00:04.229 *********** 2025-06-01 22:23:11.061506 | orchestrator | ok: [testbed-manager] 2025-06-01 22:23:11.061835 | orchestrator | 2025-06-01 22:23:11.062756 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-06-01 22:23:11.063126 | orchestrator | Sunday 01 June 2025 22:23:11 +0000 (0:00:01.127) 0:00:05.356 *********** 2025-06-01 22:23:11.122292 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:23:11.122414 | orchestrator | 2025-06-01 22:23:11.122997 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-06-01 22:23:11.125285 | orchestrator | Sunday 01 June 2025 22:23:11 +0000 (0:00:00.060) 0:00:05.417 *********** 2025-06-01 22:23:11.639092 | orchestrator | ok: [testbed-manager] 2025-06-01 22:23:11.639192 | orchestrator | 2025-06-01 22:23:11.639207 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-06-01 22:23:11.639536 | orchestrator | Sunday 01 June 2025 22:23:11 +0000 (0:00:00.517) 0:00:05.935 *********** 2025-06-01 22:23:11.729208 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:23:11.729483 | orchestrator | 2025-06-01 22:23:11.729687 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-06-01 22:23:11.729865 | orchestrator | Sunday 01 June 2025 22:23:11 +0000 (0:00:00.088) 0:00:06.023 
*********** 2025-06-01 22:23:12.281657 | orchestrator | changed: [testbed-manager] 2025-06-01 22:23:12.281755 | orchestrator | 2025-06-01 22:23:12.283389 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-06-01 22:23:12.283634 | orchestrator | Sunday 01 June 2025 22:23:12 +0000 (0:00:00.553) 0:00:06.577 *********** 2025-06-01 22:23:13.408469 | orchestrator | changed: [testbed-manager] 2025-06-01 22:23:13.409652 | orchestrator | 2025-06-01 22:23:13.409684 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-06-01 22:23:13.410875 | orchestrator | Sunday 01 June 2025 22:23:13 +0000 (0:00:01.124) 0:00:07.701 *********** 2025-06-01 22:23:14.387295 | orchestrator | ok: [testbed-manager] 2025-06-01 22:23:14.387560 | orchestrator | 2025-06-01 22:23:14.388831 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-06-01 22:23:14.389869 | orchestrator | Sunday 01 June 2025 22:23:14 +0000 (0:00:00.980) 0:00:08.681 *********** 2025-06-01 22:23:14.457527 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-06-01 22:23:14.457654 | orchestrator | 2025-06-01 22:23:14.458822 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-06-01 22:23:14.458975 | orchestrator | Sunday 01 June 2025 22:23:14 +0000 (0:00:00.070) 0:00:08.752 *********** 2025-06-01 22:23:15.689213 | orchestrator | changed: [testbed-manager] 2025-06-01 22:23:15.690368 | orchestrator | 2025-06-01 22:23:15.691826 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 22:23:15.692810 | orchestrator | 2025-06-01 22:23:15 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-01 22:23:15.693110 | orchestrator | 2025-06-01 22:23:15 | INFO  | Please wait and do not abort execution. 
2025-06-01 22:23:15.694727 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-01 22:23:15.695771 | orchestrator | 2025-06-01 22:23:15.697766 | orchestrator | 2025-06-01 22:23:15.698104 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 22:23:15.699291 | orchestrator | Sunday 01 June 2025 22:23:15 +0000 (0:00:01.229) 0:00:09.982 *********** 2025-06-01 22:23:15.699622 | orchestrator | =============================================================================== 2025-06-01 22:23:15.700803 | orchestrator | Gathering Facts --------------------------------------------------------- 3.81s 2025-06-01 22:23:15.700829 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.23s 2025-06-01 22:23:15.701652 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.13s 2025-06-01 22:23:15.702100 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.12s 2025-06-01 22:23:15.702749 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.98s 2025-06-01 22:23:15.703087 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.55s 2025-06-01 22:23:15.703524 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.52s 2025-06-01 22:23:15.704158 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.11s 2025-06-01 22:23:15.705054 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s 2025-06-01 22:23:15.705760 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2025-06-01 22:23:15.706640 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2025-06-01 22:23:15.707110 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.07s 2025-06-01 22:23:15.707636 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2025-06-01 22:23:16.184888 | orchestrator | + osism apply sshconfig 2025-06-01 22:23:17.916192 | orchestrator | Registering Redlock._acquired_script 2025-06-01 22:23:17.916293 | orchestrator | Registering Redlock._extend_script 2025-06-01 22:23:17.916307 | orchestrator | Registering Redlock._release_script 2025-06-01 22:23:17.983533 | orchestrator | 2025-06-01 22:23:17 | INFO  | Task 8e7496fe-d9cd-49df-8cbc-49dc46d99a2f (sshconfig) was prepared for execution. 2025-06-01 22:23:17.983596 | orchestrator | 2025-06-01 22:23:17 | INFO  | It takes a moment until task 8e7496fe-d9cd-49df-8cbc-49dc46d99a2f (sshconfig) has been started and output is visible here. 
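[Editor's note] The sshconfig play that follows writes one fragment per host into ~/.ssh/config.d and then assembles them into a single ~/.ssh/config. A rough shell equivalent of that flow is sketched below; the operator user name "dragon" is inferred from paths earlier in this log, and the fragment contents are purely illustrative.

# Per-host fragments, then one assembled config (illustrative contents).
mkdir -p ~/.ssh/config.d
for host in testbed-manager testbed-node-{0..5}; do
    printf 'Host %s\n    User dragon\n\n' "$host" > ~/.ssh/config.d/"$host"
done
cat ~/.ssh/config.d/* > ~/.ssh/config
chmod 0600 ~/.ssh/config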
2025-06-01 22:23:22.013254 | orchestrator | 2025-06-01 22:23:22.014959 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-06-01 22:23:22.015892 | orchestrator | 2025-06-01 22:23:22.017011 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-06-01 22:23:22.018077 | orchestrator | Sunday 01 June 2025 22:23:22 +0000 (0:00:00.170) 0:00:00.170 *********** 2025-06-01 22:23:22.609885 | orchestrator | ok: [testbed-manager] 2025-06-01 22:23:22.609993 | orchestrator | 2025-06-01 22:23:22.611405 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-06-01 22:23:22.611435 | orchestrator | Sunday 01 June 2025 22:23:22 +0000 (0:00:00.602) 0:00:00.772 *********** 2025-06-01 22:23:23.136516 | orchestrator | changed: [testbed-manager] 2025-06-01 22:23:23.136685 | orchestrator | 2025-06-01 22:23:23.138194 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-06-01 22:23:23.138601 | orchestrator | Sunday 01 June 2025 22:23:23 +0000 (0:00:00.525) 0:00:01.298 *********** 2025-06-01 22:23:29.153524 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-06-01 22:23:29.153662 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-06-01 22:23:29.153863 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-06-01 22:23:29.155691 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-06-01 22:23:29.155725 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-06-01 22:23:29.155737 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-06-01 22:23:29.156203 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-06-01 22:23:29.156847 | orchestrator | 2025-06-01 22:23:29.157124 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-06-01 22:23:29.157559 | orchestrator | Sunday 01 June 2025 22:23:29 +0000 (0:00:06.015) 0:00:07.314 *********** 2025-06-01 22:23:29.216190 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:23:29.216860 | orchestrator | 2025-06-01 22:23:29.217533 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-06-01 22:23:29.218013 | orchestrator | Sunday 01 June 2025 22:23:29 +0000 (0:00:00.065) 0:00:07.379 *********** 2025-06-01 22:23:29.844991 | orchestrator | changed: [testbed-manager] 2025-06-01 22:23:29.846157 | orchestrator | 2025-06-01 22:23:29.847479 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 22:23:29.847517 | orchestrator | 2025-06-01 22:23:29 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-01 22:23:29.847531 | orchestrator | 2025-06-01 22:23:29 | INFO  | Please wait and do not abort execution. 
2025-06-01 22:23:29.848807 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-01 22:23:29.849755 | orchestrator | 2025-06-01 22:23:29.851142 | orchestrator | 2025-06-01 22:23:29.851987 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 22:23:29.852709 | orchestrator | Sunday 01 June 2025 22:23:29 +0000 (0:00:00.628) 0:00:08.008 *********** 2025-06-01 22:23:29.853873 | orchestrator | =============================================================================== 2025-06-01 22:23:29.854556 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 6.02s 2025-06-01 22:23:29.854957 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.63s 2025-06-01 22:23:29.856038 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.60s 2025-06-01 22:23:29.856187 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.53s 2025-06-01 22:23:29.856824 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2025-06-01 22:23:30.358118 | orchestrator | + osism apply known-hosts 2025-06-01 22:23:32.100600 | orchestrator | Registering Redlock._acquired_script 2025-06-01 22:23:32.100701 | orchestrator | Registering Redlock._extend_script 2025-06-01 22:23:32.100716 | orchestrator | Registering Redlock._release_script 2025-06-01 22:23:32.164362 | orchestrator | 2025-06-01 22:23:32 | INFO  | Task e2f2edf5-edf7-4b3c-a4c7-9ef5fd9f57ac (known-hosts) was prepared for execution. 2025-06-01 22:23:32.164443 | orchestrator | 2025-06-01 22:23:32 | INFO  | It takes a moment until task e2f2edf5-edf7-4b3c-a4c7-9ef5fd9f57ac (known-hosts) has been started and output is visible here. 
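[Editor's note] The known-hosts play started above amounts to running ssh-keyscan per host (first by hostname, then again by ansible_host IP) and writing the results into the operator's known_hosts. A plain-shell sketch follows; the key types match the entries visible below, while the target file path is an assumption.

# Scan each host and append its keys; path and host list are assumptions.
for host in testbed-manager testbed-node-{0..5}; do
    ssh-keyscan -t rsa,ecdsa,ed25519 "$host" >> ~/.ssh/known_hosts
done
chmod 0644 ~/.ssh/known_hosts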
2025-06-01 22:23:36.304415 | orchestrator | 2025-06-01 22:23:36.304538 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-06-01 22:23:36.306662 | orchestrator | 2025-06-01 22:23:36.308187 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-06-01 22:23:36.309226 | orchestrator | Sunday 01 June 2025 22:23:36 +0000 (0:00:00.194) 0:00:00.194 *********** 2025-06-01 22:23:42.311042 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-06-01 22:23:42.312581 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-06-01 22:23:42.312778 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-06-01 22:23:42.313507 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-06-01 22:23:42.314261 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-06-01 22:23:42.314709 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-06-01 22:23:42.315506 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-06-01 22:23:42.316043 | orchestrator | 2025-06-01 22:23:42.316373 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-06-01 22:23:42.316828 | orchestrator | Sunday 01 June 2025 22:23:42 +0000 (0:00:06.008) 0:00:06.202 *********** 2025-06-01 22:23:42.468216 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-06-01 22:23:42.468460 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-06-01 22:23:42.468572 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-06-01 22:23:42.469564 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-06-01 22:23:42.471193 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-06-01 22:23:42.471222 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-06-01 22:23:42.471840 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-06-01 22:23:42.472547 | orchestrator | 2025-06-01 22:23:42.473167 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-01 22:23:42.474006 | orchestrator | Sunday 01 June 2025 22:23:42 +0000 (0:00:00.158) 0:00:06.360 *********** 2025-06-01 22:23:43.538394 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLLB1Bvh++fFOLP461U63PhyRDt8HwR5DeoNWTTDEakQQSq2s1xFq6jfaH4r887i81jfHdLXLpxXzBJD3wtgKGM=) 2025-06-01 22:23:43.538844 | 
orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDSn8FD7Xu0HqrZ+ugiqe5JRNu56E+MpmDk9hoSVmgqSIVz2CQzqfahjXCUbn0gV0Fo7/jvsOGYllJtJBoMkQpxJ9/C/tLNqky13UsW3itVVPhYHjCM/D9RFivlK17QzXpHJ6GNhwJVeR1/lz+nCqa7YoNMIrb0jZ8pdwQ72TdYI/Bhgrg+obHc+LjkVrMqmN5tOHk8a+aFfNr3PJc0uswEWlGZJUwQPP8TRrTh4oHSJR0kAfcHSrcPVOuLL1R+RkNNvyD1vrr4bFp842HZeONyOyqCMKTNPOr+jt8THB91UCkH6HcY+jBIJVdMnnBPlggdT8ZQRADJGcKBXa1ePwfHm+yxVx9VsMhDRGJPfd912yZSSfz/PfxVsmGtNTrO0CpVaffQjWV1EjoX7Cj+7ZLFvoFKJKWZ2BbFCIB5+63WnJorVhU/gfx/g227tod2stnznNIxxHRsQZZ6tEfBaFuzD27Bu5mk1epX891mpJzZ2KdRaD30clbNwh8u8ZwAB5U=) 2025-06-01 22:23:43.539538 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKy4a8gCG0NCL4KbConb+hoKHQfmLoDYsmsmEBhvUwUb) 2025-06-01 22:23:43.540011 | orchestrator | 2025-06-01 22:23:43.540469 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-01 22:23:43.541000 | orchestrator | Sunday 01 June 2025 22:23:43 +0000 (0:00:01.069) 0:00:07.430 *********** 2025-06-01 22:23:44.519192 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCM1qx8JNKRzeybgDa7fB+EhPXR2+AWMgIwIOr7/b0V+tBBdQWlFIUTLdzdNTbfmYSTy6PDQdUzSu/avBqVRkwQ/e9L4AoRwtRe1UulUorJPrDzu0/wJraEUoQbCANL9A+6Mmn/xmNs5aRUqwExfZGRVlgPPB2ZR8j5SPwM9HvuF9Ysmb2lITAH8cXBQA7jyhh1KWdGsfzE/KB5wAmwUz7wrZK7oqO9Qbbx/mpK5VqOiNZNXCFt7ybV7mOQ1nrqu+10X4eLdtr0m6Kji/krCaE1jDDeoVlio/Ut1DGFQxvluwhhk4ZZxmT8+NGeXePLm+5d7D7q0ZS1cxmse1ILSbEij82qvwqRkcqdjuazjBTGJOQtZItfT7xLC3LOgiwhA34gJQqWDQnml7rgKllzH7VFJZqmdNElMaMKUepWMt3cteVwb2VZG3tgScw8H+6VKvKAZVHWcdOhZ4XihMVKcGC1Z4bcjEpj2024wK7+hkabIDRFs1hel5bZnW5v6HxJlLE=) 2025-06-01 22:23:44.520025 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBiVoGhp1CYj5uS1mHY8PWBFUqE/9erZwply0y7+w7wlyLDabNBR57Csaa5+w/rg1q9YiUxxpBSDy2Q2fbQlw1g=) 2025-06-01 22:23:44.520226 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGo0vXqEWduuCA5E+4+TSp+g5wAzy0KEJKg22oQMxCY3) 2025-06-01 22:23:44.521240 | orchestrator | 2025-06-01 22:23:44.521736 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-01 22:23:44.521983 | orchestrator | Sunday 01 June 2025 22:23:44 +0000 (0:00:00.979) 0:00:08.409 *********** 2025-06-01 22:23:45.525720 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID01Zd3CcjKKbj4TTvEpFEGok8gLcyKp/rcf3bL5OBFK) 2025-06-01 22:23:45.527428 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCbNfN0hLgv4mbMUU8xmCTBvl5CcxKobNjAcE2POB+DMc9+MM4klUAugyoQz8+fZIMse59k8AHyUFkoFptif4yWC56cDY7QbkC1hzWonHJ9v4RzXJP29iZNeEOdbJkXLJaMJ854MofN+yeV5hb+fGpnODej7uoAmnlYkLJKj77Y8aFCKecLnDbTvqaEWawkKtsf+tEzYgw+G3/LQNiWFqNeynuHoayWgd7130XTCmtgggxYUQ0RI+ky54XwnUMb6d+qPL/kAGGJUv99g7qBGtIOJoDI4DFAX9gVLtUd5+CwnfeXUcyVFzRSV7vQ0dVrqZFRz6rj1FAQzjRySUESzxTm3zu69yLcnWT02svGk7i4ZmBljSJj1UP1smXVbIOUGWzWQOsGzWGMBW9dAqHk8NHHPH292M4/ggkctDm22lhSNNPBLf+pHL4eZZ5WpL5yWvR8YTZ6bJgNSYBnewYtr+JhOLECsOYsRo/zcrpkU5bGzCXUz6umLWuGxXR5nSvQ8sM=) 2025-06-01 22:23:45.528169 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFXH6yU3PZmV9ZP//Dh7u/Nz+MgqYpcAfI9H6TIKN8VXdBh4sMjV468NcByAr39ETyduZNKF4ioyCuZ3uabwoBY=) 2025-06-01 
22:23:45.528558 | orchestrator | 2025-06-01 22:23:45.529515 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-01 22:23:45.529949 | orchestrator | Sunday 01 June 2025 22:23:45 +0000 (0:00:01.006) 0:00:09.416 *********** 2025-06-01 22:23:46.735847 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCqcFC3Wkmr0qsSiFh1FQ3Qj1QkmdNob43UuiKhsDByViG0ZrOwJx0hf6vcVg+mmbUp8glsq9dTT74K12GCf+gU0FXOFkT7fcLlT30f/61DRuGtKTx8QlGPwAm7x4NQmmsJGkhgO5s5R1yx6GjpXI7SPjqgaTP8uKFhXMLIr44MajUmYCJaTSwLFqpd0cRA+E+PPJQkDhzN6bfZ4qyKGbugaHsJ5tASPakBWiHf/b3Za50YsOfKOg0YmGhtpe+nPbZMDaTgQR3htkkQGf8/hCLXm/ieCuv1v5Ju+/8brBMflebvQYLEizrtbWMxHrAE5pxKvj1qWqVWQu0NtbP1xjwpd8xl28lvnMa/AGnTC4FN/rz7kGumM87yCiIENKDjNHAobWGUqg5C5+zNH51ybjmGarEsiXhLJFuHXHMX9zhNtsFMbMP12HUt+9cIVkLlpXHdw+LS3XKoNWAwzgxtvqT7Oql36TTjR3AHMys7GEnZsJioA1vonQR9m3WskvoNa70=) 2025-06-01 22:23:46.736367 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKOvVgcGFOgfXz9w8yqoN30DTlVjM9Jy8k7cyjh7Ws2jK8yF2xv1Mc32qrXqtgifLizhpnGjR1uHSqZ+4YPZSk4=) 2025-06-01 22:23:46.737516 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICGCWfyhm8NaxHCoIsL68R3zEtiF7/djG6zLzlv8wIqA) 2025-06-01 22:23:46.738283 | orchestrator | 2025-06-01 22:23:46.739285 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-01 22:23:46.740506 | orchestrator | Sunday 01 June 2025 22:23:46 +0000 (0:00:01.210) 0:00:10.626 *********** 2025-06-01 22:23:47.846586 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7yfPsGsOI5CF7WH42D35OKvFU+HW+lsBVDx+84b1hTbfFLwzdoGbAFqKawTYQjDYW6tUkeI5ZMSivzQl58JxFvMKGrPHh612QbjpwpTNxpYiquSRq5wCjtpQw6NovI4lDhUI0zK7go5vd+RKdx6kqT/pCr2OPeZ+omHDWaW+3DZFkb7HeCfrkehfY/Qp64Jq2vKQvIK8epaQzHvR/UjnEbN+f+7ZcB5RvEldUiepURyDcWHQ8qxPrnJOIrFk4e/f0zJIampwtF4hZWlmrCTpKMt4F3N3LTTnFlIo4ff21+3YDuqwzkDUNpvRy6FwlceVCZJjtj16rM8BJwSu9HkhKVuMQQDhlowISby1eIS2QYHQbyyeFyKlZTaMuGcEwBluBU5UdrDdor5XJb46HqGdSuIVMswKSYmgpsw6BTh2LsNOGY2/FeaQnNl0Z5dOG3lBnIiTZ58/AIwczHZnQeiX/+UXWgynxUKOX6hI5BjEmwobdxsOZPcYe5MLJqhYga+8=) 2025-06-01 22:23:47.847435 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOmCxkgfYnoFi+3/bt4TdovFMXb98zG5NGstjpwMAs0M) 2025-06-01 22:23:47.848194 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEILj8V0PK8784PejUlJeTJLfNPFgyHA3Qfu3k1hAN9Wead14wtPXGtNH9YCReRGTvCAN0uJnw8b/lOpXPRWhWg=) 2025-06-01 22:23:47.848927 | orchestrator | 2025-06-01 22:23:47.849341 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-01 22:23:47.849959 | orchestrator | Sunday 01 June 2025 22:23:47 +0000 (0:00:01.110) 0:00:11.737 *********** 2025-06-01 22:23:48.910517 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHFpsMWYNCypIyup29V22SswEeSSRnsaNK7EzNPF0c3Vc1iznC8+W0I+4PmvUqbez26paZyd2/z3SterYibo5qc=) 2025-06-01 22:23:48.911316 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCiFekTGCTfyqalVR4s7WduCTBITucEwf9aeavzQyW8Wx33qYm2Ma3qnf9jLainG3tVpn4N9x6U9DoZ54SwgNjdle6rFP1IwDTRlGX9dwTJLfvIasAxEgMjtVwUy11GQb2pXCrIR/Ye/EGnozEWqcu3fDFdICsDe2zKUICi+Z5P6qiifjhmUzavYsyb78NQEt1HG8PguQUEv1FtvaHycQbfeLUUnPhEYjMohO068qKJDD4uFdpOqVXuD+x8xdiDRRqnoErbjSMmOISqhq+ep9Ry5H1ViaPcIuRrBm4YZv8BzH/v/Jtl5s2LY7vnsoWcaoG9ffCUN2PyDZsaVYxJ/9iG0FMCN9xeU+wTg6ghJVBf+ElD0Wx4qb9PLqeQii2Qbmnuy5dsJmwBWKaLuxgiesFPXDnJ2g3kl2ueT9IrGLlJEEweAUUcsYmopM0uGQE/FmfvWHVIp/b+rsFkH2YOBw4W+LHqdIUUmAcnEdnp/W7jA8JcTBd26oylOnkGm+hnvIs=) 2025-06-01 22:23:48.912469 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF8XSW64u6H9Q6v973L2A4kVF99INlvDLwFMJqNP+1ex) 2025-06-01 22:23:48.913091 | orchestrator | 2025-06-01 22:23:48.913643 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-01 22:23:48.914134 | orchestrator | Sunday 01 June 2025 22:23:48 +0000 (0:00:01.063) 0:00:12.800 *********** 2025-06-01 22:23:50.033716 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDO/lJgxPlTpdLJmxsvBdm3xQ8GJizb7OHkX9Fg38TJVXjXYE7Ea6p6rvR8ETxKWpgmLZ8IQUe/N2DkvZRQjJXMjQ5XjgrNoS4fhSw9O0yEcJsNwR85/1uWEvOdKUQGQd89e2xMQjmcRPZJJsSu4LP2wT9lA6Ntfat+HY3BjtKC3ybE6PMx8AzFX50ureRsTycfqkuIfE/AnDAoS/ZN1X5TezfwIe1DOJ+VYfD97jzFfpgap3i7p5D6TuedPNZu054TPoe83rDw59b830ZR2B4yN0i0LtLxqKJYrz69ekwVKaBWmbmdMrEhUaaX3xBjV/L4S3d5AuYIwYoMeciMO7l/wrYReo0gtGr980K0WLS2rnWdUIsegxxdHrjiPVOGAVcGRY6aIqX3h7R0ZX454t+ZDaj0irODBGMZ663nW0uRVNi6ghJwCGcvhiAovgZ1A0JHcAQ/Cjqc2G9PE4ZZdIMZn4HY0iTTNidUcRSulFzoKuGiWLHYpIAF181SHYWr4ns=) 2025-06-01 22:23:50.034258 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAuwBG6uajGVa57LFNU8xGkj0g0wxOhTagdphDaiSMTvZbZOuXPjtwjSJJHTXKIJ1JA2i3iOwvkDWIrKJn3fYZs=) 2025-06-01 22:23:50.035411 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPGXxcI9SvpPpRDb2O88Gy5uUqpfqUiDzEdwb9fu9qkZ) 2025-06-01 22:23:50.036566 | orchestrator | 2025-06-01 22:23:50.037588 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-06-01 22:23:50.038098 | orchestrator | Sunday 01 June 2025 22:23:50 +0000 (0:00:01.123) 0:00:13.924 *********** 2025-06-01 22:23:55.426197 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-06-01 22:23:55.427318 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-06-01 22:23:55.427902 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-06-01 22:23:55.428585 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-06-01 22:23:55.428974 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-06-01 22:23:55.430275 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-06-01 22:23:55.430308 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-06-01 22:23:55.431282 | orchestrator | 2025-06-01 22:23:55.431423 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-06-01 22:23:55.431758 | orchestrator | Sunday 01 June 2025 22:23:55 +0000 (0:00:05.393) 0:00:19.317 *********** 2025-06-01 22:23:55.590670 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of 
testbed-manager) 2025-06-01 22:23:55.591288 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-06-01 22:23:55.591354 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-06-01 22:23:55.592294 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-06-01 22:23:55.592756 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-06-01 22:23:55.593271 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-06-01 22:23:55.593724 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-06-01 22:23:55.594272 | orchestrator | 2025-06-01 22:23:55.594754 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-01 22:23:55.595158 | orchestrator | Sunday 01 June 2025 22:23:55 +0000 (0:00:00.164) 0:00:19.482 *********** 2025-06-01 22:23:56.601847 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDSn8FD7Xu0HqrZ+ugiqe5JRNu56E+MpmDk9hoSVmgqSIVz2CQzqfahjXCUbn0gV0Fo7/jvsOGYllJtJBoMkQpxJ9/C/tLNqky13UsW3itVVPhYHjCM/D9RFivlK17QzXpHJ6GNhwJVeR1/lz+nCqa7YoNMIrb0jZ8pdwQ72TdYI/Bhgrg+obHc+LjkVrMqmN5tOHk8a+aFfNr3PJc0uswEWlGZJUwQPP8TRrTh4oHSJR0kAfcHSrcPVOuLL1R+RkNNvyD1vrr4bFp842HZeONyOyqCMKTNPOr+jt8THB91UCkH6HcY+jBIJVdMnnBPlggdT8ZQRADJGcKBXa1ePwfHm+yxVx9VsMhDRGJPfd912yZSSfz/PfxVsmGtNTrO0CpVaffQjWV1EjoX7Cj+7ZLFvoFKJKWZ2BbFCIB5+63WnJorVhU/gfx/g227tod2stnznNIxxHRsQZZ6tEfBaFuzD27Bu5mk1epX891mpJzZ2KdRaD30clbNwh8u8ZwAB5U=) 2025-06-01 22:23:56.602757 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLLB1Bvh++fFOLP461U63PhyRDt8HwR5DeoNWTTDEakQQSq2s1xFq6jfaH4r887i81jfHdLXLpxXzBJD3wtgKGM=) 2025-06-01 22:23:56.603614 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKy4a8gCG0NCL4KbConb+hoKHQfmLoDYsmsmEBhvUwUb) 2025-06-01 22:23:56.604428 | orchestrator | 2025-06-01 22:23:56.606292 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-01 22:23:56.606795 | orchestrator | Sunday 01 June 2025 22:23:56 +0000 (0:00:01.010) 0:00:20.493 *********** 2025-06-01 22:23:57.613881 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCM1qx8JNKRzeybgDa7fB+EhPXR2+AWMgIwIOr7/b0V+tBBdQWlFIUTLdzdNTbfmYSTy6PDQdUzSu/avBqVRkwQ/e9L4AoRwtRe1UulUorJPrDzu0/wJraEUoQbCANL9A+6Mmn/xmNs5aRUqwExfZGRVlgPPB2ZR8j5SPwM9HvuF9Ysmb2lITAH8cXBQA7jyhh1KWdGsfzE/KB5wAmwUz7wrZK7oqO9Qbbx/mpK5VqOiNZNXCFt7ybV7mOQ1nrqu+10X4eLdtr0m6Kji/krCaE1jDDeoVlio/Ut1DGFQxvluwhhk4ZZxmT8+NGeXePLm+5d7D7q0ZS1cxmse1ILSbEij82qvwqRkcqdjuazjBTGJOQtZItfT7xLC3LOgiwhA34gJQqWDQnml7rgKllzH7VFJZqmdNElMaMKUepWMt3cteVwb2VZG3tgScw8H+6VKvKAZVHWcdOhZ4XihMVKcGC1Z4bcjEpj2024wK7+hkabIDRFs1hel5bZnW5v6HxJlLE=) 2025-06-01 22:23:57.614091 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBiVoGhp1CYj5uS1mHY8PWBFUqE/9erZwply0y7+w7wlyLDabNBR57Csaa5+w/rg1q9YiUxxpBSDy2Q2fbQlw1g=) 2025-06-01 22:23:57.614132 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGo0vXqEWduuCA5E+4+TSp+g5wAzy0KEJKg22oQMxCY3) 2025-06-01 22:23:57.615027 | orchestrator | 2025-06-01 22:23:57.616090 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-01 22:23:57.617427 | orchestrator | Sunday 01 June 2025 22:23:57 +0000 (0:00:01.011) 0:00:21.505 *********** 2025-06-01 22:23:58.585786 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFXH6yU3PZmV9ZP//Dh7u/Nz+MgqYpcAfI9H6TIKN8VXdBh4sMjV468NcByAr39ETyduZNKF4ioyCuZ3uabwoBY=) 2025-06-01 22:23:58.586129 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCbNfN0hLgv4mbMUU8xmCTBvl5CcxKobNjAcE2POB+DMc9+MM4klUAugyoQz8+fZIMse59k8AHyUFkoFptif4yWC56cDY7QbkC1hzWonHJ9v4RzXJP29iZNeEOdbJkXLJaMJ854MofN+yeV5hb+fGpnODej7uoAmnlYkLJKj77Y8aFCKecLnDbTvqaEWawkKtsf+tEzYgw+G3/LQNiWFqNeynuHoayWgd7130XTCmtgggxYUQ0RI+ky54XwnUMb6d+qPL/kAGGJUv99g7qBGtIOJoDI4DFAX9gVLtUd5+CwnfeXUcyVFzRSV7vQ0dVrqZFRz6rj1FAQzjRySUESzxTm3zu69yLcnWT02svGk7i4ZmBljSJj1UP1smXVbIOUGWzWQOsGzWGMBW9dAqHk8NHHPH292M4/ggkctDm22lhSNNPBLf+pHL4eZZ5WpL5yWvR8YTZ6bJgNSYBnewYtr+JhOLECsOYsRo/zcrpkU5bGzCXUz6umLWuGxXR5nSvQ8sM=) 2025-06-01 22:23:58.586660 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID01Zd3CcjKKbj4TTvEpFEGok8gLcyKp/rcf3bL5OBFK) 2025-06-01 22:23:58.588234 | orchestrator | 2025-06-01 22:23:58.589049 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-01 22:23:58.590244 | orchestrator | Sunday 01 June 2025 22:23:58 +0000 (0:00:00.971) 0:00:22.477 *********** 2025-06-01 22:23:59.565153 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCqcFC3Wkmr0qsSiFh1FQ3Qj1QkmdNob43UuiKhsDByViG0ZrOwJx0hf6vcVg+mmbUp8glsq9dTT74K12GCf+gU0FXOFkT7fcLlT30f/61DRuGtKTx8QlGPwAm7x4NQmmsJGkhgO5s5R1yx6GjpXI7SPjqgaTP8uKFhXMLIr44MajUmYCJaTSwLFqpd0cRA+E+PPJQkDhzN6bfZ4qyKGbugaHsJ5tASPakBWiHf/b3Za50YsOfKOg0YmGhtpe+nPbZMDaTgQR3htkkQGf8/hCLXm/ieCuv1v5Ju+/8brBMflebvQYLEizrtbWMxHrAE5pxKvj1qWqVWQu0NtbP1xjwpd8xl28lvnMa/AGnTC4FN/rz7kGumM87yCiIENKDjNHAobWGUqg5C5+zNH51ybjmGarEsiXhLJFuHXHMX9zhNtsFMbMP12HUt+9cIVkLlpXHdw+LS3XKoNWAwzgxtvqT7Oql36TTjR3AHMys7GEnZsJioA1vonQR9m3WskvoNa70=) 2025-06-01 22:23:59.565262 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKOvVgcGFOgfXz9w8yqoN30DTlVjM9Jy8k7cyjh7Ws2jK8yF2xv1Mc32qrXqtgifLizhpnGjR1uHSqZ+4YPZSk4=) 2025-06-01 
22:23:59.566463 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICGCWfyhm8NaxHCoIsL68R3zEtiF7/djG6zLzlv8wIqA) 2025-06-01 22:23:59.566612 | orchestrator | 2025-06-01 22:23:59.567307 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-01 22:23:59.567447 | orchestrator | Sunday 01 June 2025 22:23:59 +0000 (0:00:00.979) 0:00:23.457 *********** 2025-06-01 22:24:00.632293 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEILj8V0PK8784PejUlJeTJLfNPFgyHA3Qfu3k1hAN9Wead14wtPXGtNH9YCReRGTvCAN0uJnw8b/lOpXPRWhWg=) 2025-06-01 22:24:00.633444 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7yfPsGsOI5CF7WH42D35OKvFU+HW+lsBVDx+84b1hTbfFLwzdoGbAFqKawTYQjDYW6tUkeI5ZMSivzQl58JxFvMKGrPHh612QbjpwpTNxpYiquSRq5wCjtpQw6NovI4lDhUI0zK7go5vd+RKdx6kqT/pCr2OPeZ+omHDWaW+3DZFkb7HeCfrkehfY/Qp64Jq2vKQvIK8epaQzHvR/UjnEbN+f+7ZcB5RvEldUiepURyDcWHQ8qxPrnJOIrFk4e/f0zJIampwtF4hZWlmrCTpKMt4F3N3LTTnFlIo4ff21+3YDuqwzkDUNpvRy6FwlceVCZJjtj16rM8BJwSu9HkhKVuMQQDhlowISby1eIS2QYHQbyyeFyKlZTaMuGcEwBluBU5UdrDdor5XJb46HqGdSuIVMswKSYmgpsw6BTh2LsNOGY2/FeaQnNl0Z5dOG3lBnIiTZ58/AIwczHZnQeiX/+UXWgynxUKOX6hI5BjEmwobdxsOZPcYe5MLJqhYga+8=) 2025-06-01 22:24:00.633505 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOmCxkgfYnoFi+3/bt4TdovFMXb98zG5NGstjpwMAs0M) 2025-06-01 22:24:00.633622 | orchestrator | 2025-06-01 22:24:00.634317 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-01 22:24:00.634881 | orchestrator | Sunday 01 June 2025 22:24:00 +0000 (0:00:01.066) 0:00:24.523 *********** 2025-06-01 22:24:01.703031 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF8XSW64u6H9Q6v973L2A4kVF99INlvDLwFMJqNP+1ex) 2025-06-01 22:24:01.705108 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCiFekTGCTfyqalVR4s7WduCTBITucEwf9aeavzQyW8Wx33qYm2Ma3qnf9jLainG3tVpn4N9x6U9DoZ54SwgNjdle6rFP1IwDTRlGX9dwTJLfvIasAxEgMjtVwUy11GQb2pXCrIR/Ye/EGnozEWqcu3fDFdICsDe2zKUICi+Z5P6qiifjhmUzavYsyb78NQEt1HG8PguQUEv1FtvaHycQbfeLUUnPhEYjMohO068qKJDD4uFdpOqVXuD+x8xdiDRRqnoErbjSMmOISqhq+ep9Ry5H1ViaPcIuRrBm4YZv8BzH/v/Jtl5s2LY7vnsoWcaoG9ffCUN2PyDZsaVYxJ/9iG0FMCN9xeU+wTg6ghJVBf+ElD0Wx4qb9PLqeQii2Qbmnuy5dsJmwBWKaLuxgiesFPXDnJ2g3kl2ueT9IrGLlJEEweAUUcsYmopM0uGQE/FmfvWHVIp/b+rsFkH2YOBw4W+LHqdIUUmAcnEdnp/W7jA8JcTBd26oylOnkGm+hnvIs=) 2025-06-01 22:24:01.705778 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHFpsMWYNCypIyup29V22SswEeSSRnsaNK7EzNPF0c3Vc1iznC8+W0I+4PmvUqbez26paZyd2/z3SterYibo5qc=) 2025-06-01 22:24:01.705919 | orchestrator | 2025-06-01 22:24:01.707514 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-01 22:24:01.707964 | orchestrator | Sunday 01 June 2025 22:24:01 +0000 (0:00:01.071) 0:00:25.594 *********** 2025-06-01 22:24:02.788475 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDO/lJgxPlTpdLJmxsvBdm3xQ8GJizb7OHkX9Fg38TJVXjXYE7Ea6p6rvR8ETxKWpgmLZ8IQUe/N2DkvZRQjJXMjQ5XjgrNoS4fhSw9O0yEcJsNwR85/1uWEvOdKUQGQd89e2xMQjmcRPZJJsSu4LP2wT9lA6Ntfat+HY3BjtKC3ybE6PMx8AzFX50ureRsTycfqkuIfE/AnDAoS/ZN1X5TezfwIe1DOJ+VYfD97jzFfpgap3i7p5D6TuedPNZu054TPoe83rDw59b830ZR2B4yN0i0LtLxqKJYrz69ekwVKaBWmbmdMrEhUaaX3xBjV/L4S3d5AuYIwYoMeciMO7l/wrYReo0gtGr980K0WLS2rnWdUIsegxxdHrjiPVOGAVcGRY6aIqX3h7R0ZX454t+ZDaj0irODBGMZ663nW0uRVNi6ghJwCGcvhiAovgZ1A0JHcAQ/Cjqc2G9PE4ZZdIMZn4HY0iTTNidUcRSulFzoKuGiWLHYpIAF181SHYWr4ns=) 2025-06-01 22:24:02.789129 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPGXxcI9SvpPpRDb2O88Gy5uUqpfqUiDzEdwb9fu9qkZ) 2025-06-01 22:24:02.790075 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAuwBG6uajGVa57LFNU8xGkj0g0wxOhTagdphDaiSMTvZbZOuXPjtwjSJJHTXKIJ1JA2i3iOwvkDWIrKJn3fYZs=) 2025-06-01 22:24:02.790747 | orchestrator | 2025-06-01 22:24:02.791511 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-06-01 22:24:02.792673 | orchestrator | Sunday 01 June 2025 22:24:02 +0000 (0:00:01.084) 0:00:26.679 *********** 2025-06-01 22:24:02.951482 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-06-01 22:24:02.951562 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-06-01 22:24:02.952672 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-06-01 22:24:02.953980 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-06-01 22:24:02.954422 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-01 22:24:02.955108 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-06-01 22:24:02.955495 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-06-01 22:24:02.955970 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:24:02.956446 | orchestrator | 2025-06-01 22:24:02.956900 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-06-01 22:24:02.957268 | orchestrator | Sunday 01 June 2025 22:24:02 +0000 (0:00:00.164) 0:00:26.844 *********** 2025-06-01 22:24:03.023980 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:24:03.024126 | orchestrator | 2025-06-01 22:24:03.025669 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-06-01 22:24:03.026531 | orchestrator | Sunday 01 June 2025 22:24:03 +0000 (0:00:00.072) 0:00:26.916 *********** 2025-06-01 22:24:03.082093 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:24:03.082608 | orchestrator | 2025-06-01 22:24:03.083311 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-06-01 22:24:03.084063 | orchestrator | Sunday 01 June 2025 22:24:03 +0000 (0:00:00.056) 0:00:26.973 *********** 2025-06-01 22:24:03.738113 | orchestrator | changed: [testbed-manager] 2025-06-01 22:24:03.738278 | orchestrator | 2025-06-01 22:24:03.739532 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 22:24:03.740222 | orchestrator | 2025-06-01 22:24:03 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-01 22:24:03.740251 | orchestrator | 2025-06-01 22:24:03 | INFO  | Please wait and do not abort execution. 
2025-06-01 22:24:03.741374 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-01 22:24:03.742170 | orchestrator | 2025-06-01 22:24:03.743474 | orchestrator | 2025-06-01 22:24:03.744646 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 22:24:03.745411 | orchestrator | Sunday 01 June 2025 22:24:03 +0000 (0:00:00.656) 0:00:27.629 *********** 2025-06-01 22:24:03.745873 | orchestrator | =============================================================================== 2025-06-01 22:24:03.746876 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.01s 2025-06-01 22:24:03.747686 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.39s 2025-06-01 22:24:03.748304 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s 2025-06-01 22:24:03.748948 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-06-01 22:24:03.749857 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-06-01 22:24:03.750527 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-06-01 22:24:03.751042 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-06-01 22:24:03.751472 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-06-01 22:24:03.752172 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-06-01 22:24:03.752447 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-06-01 22:24:03.752923 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-06-01 22:24:03.753441 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-06-01 22:24:03.753670 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-06-01 22:24:03.754185 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2025-06-01 22:24:03.754951 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2025-06-01 22:24:03.755510 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.97s 2025-06-01 22:24:03.755776 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.66s 2025-06-01 22:24:03.756520 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s 2025-06-01 22:24:03.756618 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2025-06-01 22:24:03.757990 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2025-06-01 22:24:04.235481 | orchestrator | + osism apply squid 2025-06-01 22:24:05.954508 | orchestrator | Registering Redlock._acquired_script 2025-06-01 22:24:05.954635 | orchestrator | Registering Redlock._extend_script 2025-06-01 22:24:05.954651 | orchestrator | Registering Redlock._release_script 2025-06-01 22:24:06.018075 | orchestrator | 2025-06-01 22:24:06 | INFO  | Task ea17df4a-a9fa-438d-9dfd-e48c86c81d33 (squid) was 
prepared for execution. 2025-06-01 22:24:06.020296 | orchestrator | 2025-06-01 22:24:06 | INFO  | It takes a moment until task ea17df4a-a9fa-438d-9dfd-e48c86c81d33 (squid) has been started and output is visible here. 2025-06-01 22:24:10.126071 | orchestrator | 2025-06-01 22:24:10.126187 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-06-01 22:24:10.126980 | orchestrator | 2025-06-01 22:24:10.127517 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-06-01 22:24:10.128396 | orchestrator | Sunday 01 June 2025 22:24:10 +0000 (0:00:00.167) 0:00:00.167 *********** 2025-06-01 22:24:10.221853 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-06-01 22:24:10.222533 | orchestrator | 2025-06-01 22:24:10.223978 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-06-01 22:24:10.224745 | orchestrator | Sunday 01 June 2025 22:24:10 +0000 (0:00:00.098) 0:00:00.266 *********** 2025-06-01 22:24:11.676092 | orchestrator | ok: [testbed-manager] 2025-06-01 22:24:11.676796 | orchestrator | 2025-06-01 22:24:11.677881 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-06-01 22:24:11.679415 | orchestrator | Sunday 01 June 2025 22:24:11 +0000 (0:00:01.452) 0:00:01.719 *********** 2025-06-01 22:24:12.932205 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-06-01 22:24:12.932822 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-06-01 22:24:12.933597 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-06-01 22:24:12.934069 | orchestrator | 2025-06-01 22:24:12.934667 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-06-01 22:24:12.935603 | orchestrator | Sunday 01 June 2025 22:24:12 +0000 (0:00:01.255) 0:00:02.974 *********** 2025-06-01 22:24:14.031498 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-06-01 22:24:14.031673 | orchestrator | 2025-06-01 22:24:14.032333 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-06-01 22:24:14.035044 | orchestrator | Sunday 01 June 2025 22:24:14 +0000 (0:00:01.099) 0:00:04.074 *********** 2025-06-01 22:24:14.383773 | orchestrator | ok: [testbed-manager] 2025-06-01 22:24:14.384819 | orchestrator | 2025-06-01 22:24:14.385900 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-06-01 22:24:14.386719 | orchestrator | Sunday 01 June 2025 22:24:14 +0000 (0:00:00.353) 0:00:04.427 *********** 2025-06-01 22:24:15.338636 | orchestrator | changed: [testbed-manager] 2025-06-01 22:24:15.338796 | orchestrator | 2025-06-01 22:24:15.339047 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-06-01 22:24:15.339754 | orchestrator | Sunday 01 June 2025 22:24:15 +0000 (0:00:00.954) 0:00:05.382 *********** 2025-06-01 22:24:47.396802 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
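[Editor's note] The "FAILED - RETRYING ... (10 retries left)" line above is a retries/until loop waiting for the freshly started squid container to report healthy. The same pattern in plain shell; the container name "squid" and the delay between attempts are assumptions.

# Retry-until-healthy loop analogous to the Ansible retries above.
attempt=1
until [[ "$(docker inspect -f '{{.State.Health.Status}}' squid 2>/dev/null)" == healthy ]]; do
    if (( attempt >= 10 )); then
        echo "squid never became healthy" >&2
        exit 1
    fi
    attempt=$(( attempt + 1 ))
    sleep 6   # delay between retries is an assumption
done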
2025-06-01 22:24:47.397324 | orchestrator | ok: [testbed-manager] 2025-06-01 22:24:47.398089 | orchestrator | 2025-06-01 22:24:47.398802 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-06-01 22:24:47.399420 | orchestrator | Sunday 01 June 2025 22:24:47 +0000 (0:00:32.053) 0:00:37.435 *********** 2025-06-01 22:24:59.898288 | orchestrator | changed: [testbed-manager] 2025-06-01 22:24:59.898459 | orchestrator | 2025-06-01 22:24:59.898476 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-06-01 22:24:59.898488 | orchestrator | Sunday 01 June 2025 22:24:59 +0000 (0:00:12.499) 0:00:49.935 *********** 2025-06-01 22:25:59.968574 | orchestrator | Pausing for 60 seconds 2025-06-01 22:25:59.968736 | orchestrator | changed: [testbed-manager] 2025-06-01 22:25:59.970366 | orchestrator | 2025-06-01 22:25:59.971760 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-06-01 22:25:59.972661 | orchestrator | Sunday 01 June 2025 22:25:59 +0000 (0:01:00.073) 0:01:50.008 *********** 2025-06-01 22:26:00.054828 | orchestrator | ok: [testbed-manager] 2025-06-01 22:26:00.054946 | orchestrator | 2025-06-01 22:26:00.055718 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-06-01 22:26:00.055741 | orchestrator | Sunday 01 June 2025 22:26:00 +0000 (0:00:00.086) 0:01:50.095 *********** 2025-06-01 22:26:00.755111 | orchestrator | changed: [testbed-manager] 2025-06-01 22:26:00.755215 | orchestrator | 2025-06-01 22:26:00.755901 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 22:26:00.755949 | orchestrator | 2025-06-01 22:26:00 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-01 22:26:00.756056 | orchestrator | 2025-06-01 22:26:00 | INFO  | Please wait and do not abort execution. 
2025-06-01 22:26:00.756695 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 22:26:00.757887 | orchestrator | 2025-06-01 22:26:00.757909 | orchestrator | 2025-06-01 22:26:00.758971 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 22:26:00.759065 | orchestrator | Sunday 01 June 2025 22:26:00 +0000 (0:00:00.703) 0:01:50.799 *********** 2025-06-01 22:26:00.760160 | orchestrator | =============================================================================== 2025-06-01 22:26:00.760737 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s 2025-06-01 22:26:00.761032 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.05s 2025-06-01 22:26:00.761763 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.50s 2025-06-01 22:26:00.762075 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.45s 2025-06-01 22:26:00.762795 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.26s 2025-06-01 22:26:00.763088 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.10s 2025-06-01 22:26:00.763529 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.95s 2025-06-01 22:26:00.763708 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.70s 2025-06-01 22:26:00.764289 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.35s 2025-06-01 22:26:00.764653 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s 2025-06-01 22:26:00.765018 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.09s 2025-06-01 22:26:01.274798 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-06-01 22:26:01.275175 | orchestrator | ++ semver latest 9.0.0 2025-06-01 22:26:01.326354 | orchestrator | + [[ -1 -lt 0 ]] 2025-06-01 22:26:01.326438 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-06-01 22:26:01.328071 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-06-01 22:26:03.010971 | orchestrator | Registering Redlock._acquired_script 2025-06-01 22:26:03.011064 | orchestrator | Registering Redlock._extend_script 2025-06-01 22:26:03.011078 | orchestrator | Registering Redlock._release_script 2025-06-01 22:26:03.072277 | orchestrator | 2025-06-01 22:26:03 | INFO  | Task 04fdec87-72e8-446c-ab29-c759afe6fc62 (operator) was prepared for execution. 2025-06-01 22:26:03.072354 | orchestrator | 2025-06-01 22:26:03 | INFO  | It takes a moment until task 04fdec87-72e8-446c-ab29-c759afe6fc62 (operator) has been started and output is visible here. 
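[Editor's note] The semver checks traced above gate version-specific steps: the helper prints -1, 0 or 1 for older/equal/newer, and the literal tag "latest" is exempted before any numeric comparison (it sorts below every release, hence the -1). A sketch of that guard; the MANAGER_VERSION variable name is an assumption.

# Version gate as seen in the trace; variable name is illustrative.
if [[ "$MANAGER_VERSION" != latest ]]; then
    if [[ "$(semver "$MANAGER_VERSION" 9.0.0)" -lt 0 ]]; then
        echo "manager release is older than 9.0.0, taking the legacy path"
    fi
fi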
2025-06-01 22:26:07.241737 | orchestrator | 2025-06-01 22:26:07.241962 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-06-01 22:26:07.243017 | orchestrator | 2025-06-01 22:26:07.244042 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-01 22:26:07.245416 | orchestrator | Sunday 01 June 2025 22:26:07 +0000 (0:00:00.152) 0:00:00.152 *********** 2025-06-01 22:26:10.452830 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:26:10.453075 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:26:10.454150 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:26:10.454878 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:26:10.455273 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:26:10.456069 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:26:10.456495 | orchestrator | 2025-06-01 22:26:10.457122 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-06-01 22:26:10.457722 | orchestrator | Sunday 01 June 2025 22:26:10 +0000 (0:00:03.212) 0:00:03.365 *********** 2025-06-01 22:26:11.272879 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:26:11.274340 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:26:11.275709 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:26:11.276316 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:26:11.277030 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:26:11.277303 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:26:11.278473 | orchestrator | 2025-06-01 22:26:11.278750 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-06-01 22:26:11.279252 | orchestrator | 2025-06-01 22:26:11.280006 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-06-01 22:26:11.280539 | orchestrator | Sunday 01 June 2025 22:26:11 +0000 (0:00:00.821) 0:00:04.186 *********** 2025-06-01 22:26:11.343420 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:26:11.364623 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:26:11.386926 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:26:11.452727 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:26:11.452796 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:26:11.453536 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:26:11.453980 | orchestrator | 2025-06-01 22:26:11.454554 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-06-01 22:26:11.458186 | orchestrator | Sunday 01 June 2025 22:26:11 +0000 (0:00:00.177) 0:00:04.363 *********** 2025-06-01 22:26:11.531525 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:26:11.564905 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:26:11.587646 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:26:11.647291 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:26:11.647637 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:26:11.647839 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:26:11.648736 | orchestrator | 2025-06-01 22:26:11.649032 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-06-01 22:26:11.649537 | orchestrator | Sunday 01 June 2025 22:26:11 +0000 (0:00:00.197) 0:00:04.560 *********** 2025-06-01 22:26:12.326375 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:26:12.326786 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:26:12.328110 | orchestrator | changed: [testbed-node-3] 2025-06-01 
22:26:12.328735 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:26:12.329974 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:26:12.331069 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:26:12.331878 | orchestrator | 2025-06-01 22:26:12.332550 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-06-01 22:26:12.333233 | orchestrator | Sunday 01 June 2025 22:26:12 +0000 (0:00:00.679) 0:00:05.240 *********** 2025-06-01 22:26:13.138364 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:26:13.138567 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:26:13.139940 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:26:13.140892 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:26:13.141631 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:26:13.142305 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:26:13.143031 | orchestrator | 2025-06-01 22:26:13.143522 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-06-01 22:26:13.144136 | orchestrator | Sunday 01 June 2025 22:26:13 +0000 (0:00:00.811) 0:00:06.051 *********** 2025-06-01 22:26:14.342505 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-06-01 22:26:14.342957 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-06-01 22:26:14.344288 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-06-01 22:26:14.344713 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-06-01 22:26:14.345856 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-06-01 22:26:14.345903 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-06-01 22:26:14.346667 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-06-01 22:26:14.347742 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-06-01 22:26:14.347762 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-06-01 22:26:14.348121 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-06-01 22:26:14.348639 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-06-01 22:26:14.349246 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-06-01 22:26:14.349912 | orchestrator | 2025-06-01 22:26:14.350228 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-06-01 22:26:14.350859 | orchestrator | Sunday 01 June 2025 22:26:14 +0000 (0:00:01.202) 0:00:07.254 *********** 2025-06-01 22:26:15.615328 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:26:15.616214 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:26:15.616244 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:26:15.616544 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:26:15.617065 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:26:15.617512 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:26:15.617930 | orchestrator | 2025-06-01 22:26:15.619622 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-06-01 22:26:15.620157 | orchestrator | Sunday 01 June 2025 22:26:15 +0000 (0:00:01.273) 0:00:08.527 *********** 2025-06-01 22:26:16.803523 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-06-01 22:26:16.803629 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-06-01 22:26:16.805808 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-06-01 22:26:16.837107 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-06-01 22:26:16.837978 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-06-01 22:26:16.838355 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-06-01 22:26:16.839185 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-06-01 22:26:16.839960 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-06-01 22:26:16.841742 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-06-01 22:26:16.841975 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-06-01 22:26:16.842815 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-06-01 22:26:16.843682 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-06-01 22:26:16.844723 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-06-01 22:26:16.846008 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-06-01 22:26:16.847006 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-06-01 22:26:16.847798 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-06-01 22:26:16.848258 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-06-01 22:26:16.849236 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-06-01 22:26:16.849949 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-06-01 22:26:16.851224 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-06-01 22:26:16.852171 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-06-01 22:26:16.853378 | orchestrator | 2025-06-01 22:26:16.853983 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-06-01 22:26:16.854863 | orchestrator | Sunday 01 June 2025 22:26:16 +0000 (0:00:01.223) 0:00:09.751 *********** 2025-06-01 22:26:17.424941 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:26:17.425640 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:26:17.425843 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:26:17.427471 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:26:17.427499 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:26:17.428161 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:26:17.428798 | orchestrator | 2025-06-01 22:26:17.429256 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-06-01 22:26:17.429489 | orchestrator | Sunday 01 June 2025 22:26:17 +0000 (0:00:00.586) 0:00:10.337 *********** 2025-06-01 22:26:17.523877 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:26:17.548447 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:26:17.602663 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:26:17.602740 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:26:17.603869 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:26:17.607332 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:26:17.607591 | orchestrator | 2025-06-01 22:26:17.608367 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 
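The osism.commons.operator tasks above and below reduce to Ansible's user and authorized_key modules. A rough sketch, assuming OSISM's documented default operator name "dragon" and a variable called operator_authorized_keys (both may differ from the role's actual defaults):

- name: Create user
  ansible.builtin.user:
    name: dragon              # assumed OSISM default operator user
    shell: /bin/bash

- name: Add user to additional groups
  ansible.builtin.user:
    name: dragon
    groups: "{{ item }}"
    append: true              # keep existing group memberships
  loop: [adm, sudo]

- name: Set ssh authorized keys
  ansible.posix.authorized_key:
    user: dragon
    key: "{{ item }}"
  loop: "{{ operator_authorized_keys | default([]) }}"   # assumed variable name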
2025-06-01 22:26:17.609109 | orchestrator | Sunday 01 June 2025 22:26:17 +0000 (0:00:00.178) 0:00:10.515 *********** 2025-06-01 22:26:18.275637 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-01 22:26:18.275812 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-01 22:26:18.276690 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:26:18.277370 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-01 22:26:18.278345 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:26:18.278547 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:26:18.279130 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-06-01 22:26:18.279955 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-06-01 22:26:18.280267 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-01 22:26:18.280744 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:26:18.281042 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:26:18.281313 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:26:18.281796 | orchestrator | 2025-06-01 22:26:18.282637 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-06-01 22:26:18.282660 | orchestrator | Sunday 01 June 2025 22:26:18 +0000 (0:00:00.673) 0:00:11.188 *********** 2025-06-01 22:26:18.345484 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:26:18.366697 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:26:18.388090 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:26:18.410329 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:26:18.447372 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:26:18.447915 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:26:18.447999 | orchestrator | 2025-06-01 22:26:18.448432 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-06-01 22:26:18.448670 | orchestrator | Sunday 01 June 2025 22:26:18 +0000 (0:00:00.172) 0:00:11.361 *********** 2025-06-01 22:26:18.499325 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:26:18.542722 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:26:18.576656 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:26:18.622221 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:26:18.623739 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:26:18.627027 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:26:18.627070 | orchestrator | 2025-06-01 22:26:18.627083 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-06-01 22:26:18.627096 | orchestrator | Sunday 01 June 2025 22:26:18 +0000 (0:00:00.174) 0:00:11.535 *********** 2025-06-01 22:26:18.678542 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:26:18.700633 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:26:18.724828 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:26:18.746850 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:26:18.777175 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:26:18.778530 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:26:18.778767 | orchestrator | 2025-06-01 22:26:18.780053 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-06-01 22:26:18.781211 | orchestrator | Sunday 01 June 2025 22:26:18 +0000 (0:00:00.155) 0:00:11.690 *********** 2025-06-01 22:26:19.397714 | orchestrator | changed: [testbed-node-0] 2025-06-01 
22:26:19.397921 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:26:19.398815 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:26:19.400600 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:26:19.401465 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:26:19.401608 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:26:19.402184 | orchestrator | 2025-06-01 22:26:19.402819 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-06-01 22:26:19.403273 | orchestrator | Sunday 01 June 2025 22:26:19 +0000 (0:00:00.620) 0:00:12.311 *********** 2025-06-01 22:26:19.464433 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:26:19.525865 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:26:19.622996 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:26:19.623088 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:26:19.623154 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:26:19.623691 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:26:19.623915 | orchestrator | 2025-06-01 22:26:19.625652 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 22:26:19.625696 | orchestrator | 2025-06-01 22:26:19 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-01 22:26:19.625908 | orchestrator | 2025-06-01 22:26:19 | INFO  | Please wait and do not abort execution. 2025-06-01 22:26:19.626926 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-01 22:26:19.627741 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-01 22:26:19.630947 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-01 22:26:19.630990 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-01 22:26:19.631849 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-01 22:26:19.632350 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-01 22:26:19.634999 | orchestrator | 2025-06-01 22:26:19.635805 | orchestrator | 2025-06-01 22:26:19.636527 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 22:26:19.637022 | orchestrator | Sunday 01 June 2025 22:26:19 +0000 (0:00:00.225) 0:00:12.536 *********** 2025-06-01 22:26:19.637900 | orchestrator | =============================================================================== 2025-06-01 22:26:19.638784 | orchestrator | Gathering Facts --------------------------------------------------------- 3.21s 2025-06-01 22:26:19.639008 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.27s 2025-06-01 22:26:19.639904 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.22s 2025-06-01 22:26:19.640647 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.20s 2025-06-01 22:26:19.641515 | orchestrator | Do not require tty for all users ---------------------------------------- 0.82s 2025-06-01 22:26:19.642129 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.81s 2025-06-01 22:26:19.642645 | orchestrator | 
osism.commons.operator : Create operator group -------------------------- 0.68s 2025-06-01 22:26:19.643118 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.67s 2025-06-01 22:26:19.643617 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.62s 2025-06-01 22:26:19.644178 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.59s 2025-06-01 22:26:19.645247 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.23s 2025-06-01 22:26:19.646609 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.20s 2025-06-01 22:26:19.647728 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s 2025-06-01 22:26:19.649213 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.18s 2025-06-01 22:26:19.650438 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.17s 2025-06-01 22:26:19.651660 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.17s 2025-06-01 22:26:19.652439 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s 2025-06-01 22:26:20.118973 | orchestrator | + osism apply --environment custom facts 2025-06-01 22:26:21.773196 | orchestrator | 2025-06-01 22:26:21 | INFO  | Trying to run play facts in environment custom 2025-06-01 22:26:21.777442 | orchestrator | Registering Redlock._acquired_script 2025-06-01 22:26:21.777484 | orchestrator | Registering Redlock._extend_script 2025-06-01 22:26:21.777491 | orchestrator | Registering Redlock._release_script 2025-06-01 22:26:21.836757 | orchestrator | 2025-06-01 22:26:21 | INFO  | Task 676992c9-0500-443c-a42f-36d593b027e7 (facts) was prepared for execution. 2025-06-01 22:26:21.836848 | orchestrator | 2025-06-01 22:26:21 | INFO  | It takes a moment until task 676992c9-0500-443c-a42f-36d593b027e7 (facts) has been started and output is visible here. 
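The facts play that follows works through Ansible's local facts mechanism: any *.fact file (INI/JSON, or an executable emitting JSON) placed under /etc/ansible/facts.d shows up as ansible_local.<name> on the next fact gathering. A minimal sketch of the two tasks, with an assumed file name:

- name: Create custom facts directory
  ansible.builtin.file:
    path: /etc/ansible/facts.d
    state: directory
    mode: "0755"

- name: Copy fact file
  ansible.builtin.copy:
    src: testbed_network_devices.fact   # hypothetical file name
    dest: /etc/ansible/facts.d/
    mode: "0644"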
2025-06-01 22:26:25.749580 | orchestrator | 2025-06-01 22:26:25.749685 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-06-01 22:26:25.754075 | orchestrator | 2025-06-01 22:26:25.754148 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-06-01 22:26:25.754261 | orchestrator | Sunday 01 June 2025 22:26:25 +0000 (0:00:00.087) 0:00:00.087 *********** 2025-06-01 22:26:27.291110 | orchestrator | ok: [testbed-manager] 2025-06-01 22:26:27.291277 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:26:27.291729 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:26:27.292033 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:26:27.294142 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:26:27.294246 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:26:27.294260 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:26:27.294272 | orchestrator | 2025-06-01 22:26:27.294354 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-06-01 22:26:27.295288 | orchestrator | Sunday 01 June 2025 22:26:27 +0000 (0:00:01.542) 0:00:01.630 *********** 2025-06-01 22:26:28.478596 | orchestrator | ok: [testbed-manager] 2025-06-01 22:26:28.478896 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:26:28.478925 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:26:28.480741 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:26:28.482384 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:26:28.483352 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:26:28.484156 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:26:28.485137 | orchestrator | 2025-06-01 22:26:28.485936 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-06-01 22:26:28.486838 | orchestrator | 2025-06-01 22:26:28.488131 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-06-01 22:26:28.489901 | orchestrator | Sunday 01 June 2025 22:26:28 +0000 (0:00:01.189) 0:00:02.819 *********** 2025-06-01 22:26:28.602339 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:26:28.605958 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:26:28.606805 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:26:28.607196 | orchestrator | 2025-06-01 22:26:28.607976 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-06-01 22:26:28.608872 | orchestrator | Sunday 01 June 2025 22:26:28 +0000 (0:00:00.124) 0:00:02.943 *********** 2025-06-01 22:26:28.804949 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:26:28.805115 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:26:28.807420 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:26:28.809031 | orchestrator | 2025-06-01 22:26:28.809472 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-06-01 22:26:28.810152 | orchestrator | Sunday 01 June 2025 22:26:28 +0000 (0:00:00.202) 0:00:03.146 *********** 2025-06-01 22:26:29.000709 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:26:29.001228 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:26:29.002772 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:26:29.003993 | orchestrator | 2025-06-01 22:26:29.004298 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-06-01 22:26:29.005033 | orchestrator | Sunday 01 
June 2025 22:26:28 +0000 (0:00:00.194) 0:00:03.341 *********** 2025-06-01 22:26:29.161311 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 22:26:29.161451 | orchestrator | 2025-06-01 22:26:29.161786 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-06-01 22:26:29.162219 | orchestrator | Sunday 01 June 2025 22:26:29 +0000 (0:00:00.161) 0:00:03.502 *********** 2025-06-01 22:26:29.599772 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:26:29.599926 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:26:29.600820 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:26:29.602506 | orchestrator | 2025-06-01 22:26:29.603256 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-06-01 22:26:29.604170 | orchestrator | Sunday 01 June 2025 22:26:29 +0000 (0:00:00.437) 0:00:03.940 *********** 2025-06-01 22:26:29.709096 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:26:29.709977 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:26:29.710945 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:26:29.713181 | orchestrator | 2025-06-01 22:26:29.713926 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-06-01 22:26:29.714083 | orchestrator | Sunday 01 June 2025 22:26:29 +0000 (0:00:00.109) 0:00:04.050 *********** 2025-06-01 22:26:30.782192 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:26:30.782583 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:26:30.783378 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:26:30.784997 | orchestrator | 2025-06-01 22:26:30.786224 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-06-01 22:26:30.787109 | orchestrator | Sunday 01 June 2025 22:26:30 +0000 (0:00:01.070) 0:00:05.120 *********** 2025-06-01 22:26:31.235649 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:26:31.236779 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:26:31.237234 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:26:31.238643 | orchestrator | 2025-06-01 22:26:31.240164 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-01 22:26:31.241209 | orchestrator | Sunday 01 June 2025 22:26:31 +0000 (0:00:00.453) 0:00:05.573 *********** 2025-06-01 22:26:32.275642 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:26:32.276540 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:26:32.276696 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:26:32.277897 | orchestrator | 2025-06-01 22:26:32.277921 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-01 22:26:32.278500 | orchestrator | Sunday 01 June 2025 22:26:32 +0000 (0:00:01.042) 0:00:06.616 *********** 2025-06-01 22:26:46.014793 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:26:46.014913 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:26:46.014929 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:26:46.014941 | orchestrator | 2025-06-01 22:26:46.014954 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-06-01 22:26:46.015032 | orchestrator | Sunday 01 June 2025 22:26:46 +0000 (0:00:13.732) 0:00:20.349 *********** 2025-06-01 22:26:46.135076 | orchestrator | 
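Ubuntu 24.04 moved APT configuration to the deb822 format, which explains the task pair above: the legacy /etc/apt/sources.list is removed and /etc/apt/sources.list.d/ubuntu.sources is installed instead (the "Ubuntu < 24.04" branch is skipped). A change here is what later triggers the "Force update of package cache" handler seen in this log. Sketched as a single copy task, with an illustrative mirror and suite list rather than OSISM's actual template:

- name: Copy ubuntu.sources file
  ansible.builtin.copy:
    dest: /etc/apt/sources.list.d/ubuntu.sources
    content: |
      Types: deb
      URIs: http://archive.ubuntu.com/ubuntu
      Suites: noble noble-updates noble-backports
      Components: main restricted universe multiverse
      Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
    mode: "0644"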
skipping: [testbed-node-3] 2025-06-01 22:26:46.135981 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:26:46.137516 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:26:46.138445 | orchestrator | 2025-06-01 22:26:46.139760 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-06-01 22:26:46.140428 | orchestrator | Sunday 01 June 2025 22:26:46 +0000 (0:00:00.125) 0:00:20.475 *********** 2025-06-01 22:26:53.133819 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:26:53.135804 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:26:53.137039 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:26:53.138221 | orchestrator | 2025-06-01 22:26:53.139902 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-06-01 22:26:53.140277 | orchestrator | Sunday 01 June 2025 22:26:53 +0000 (0:00:06.998) 0:00:27.473 *********** 2025-06-01 22:26:53.538638 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:26:53.538852 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:26:53.540007 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:26:53.540836 | orchestrator | 2025-06-01 22:26:53.541819 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-06-01 22:26:53.542308 | orchestrator | Sunday 01 June 2025 22:26:53 +0000 (0:00:00.405) 0:00:27.878 *********** 2025-06-01 22:26:57.090786 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-06-01 22:26:57.091972 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-06-01 22:26:57.092703 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-06-01 22:26:57.094677 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-06-01 22:26:57.095262 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-06-01 22:26:57.097570 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-06-01 22:26:57.098220 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-06-01 22:26:57.099924 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-06-01 22:26:57.101946 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-06-01 22:26:57.103538 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-06-01 22:26:57.104357 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-06-01 22:26:57.105303 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-06-01 22:26:57.106200 | orchestrator | 2025-06-01 22:26:57.106959 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-06-01 22:26:57.107558 | orchestrator | Sunday 01 June 2025 22:26:57 +0000 (0:00:03.550) 0:00:31.429 *********** 2025-06-01 22:26:58.331721 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:26:58.333991 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:26:58.334219 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:26:58.335539 | orchestrator | 2025-06-01 22:26:58.336807 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-01 22:26:58.338090 | orchestrator | 2025-06-01 22:26:58.339145 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-01 22:26:58.339743 | orchestrator | 
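The testbed_ceph_* files copied above are plain local facts, so any later play can read the per-node device lists without rescanning disks. For example (the fact key is assumed to match the copied file name):

- name: Show Ceph OSD devices discovered via custom facts
  ansible.builtin.debug:
    var: ansible_local.testbed_ceph_osd_devices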
Sunday 01 June 2025 22:26:58 +0000 (0:00:01.241) 0:00:32.670 *********** 2025-06-01 22:27:02.182352 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:27:02.182532 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:27:02.182617 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:27:02.183187 | orchestrator | ok: [testbed-manager] 2025-06-01 22:27:02.183208 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:27:02.183515 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:27:02.183667 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:27:02.184040 | orchestrator | 2025-06-01 22:27:02.184202 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 22:27:02.185052 | orchestrator | 2025-06-01 22:27:02 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-01 22:27:02.185077 | orchestrator | 2025-06-01 22:27:02 | INFO  | Please wait and do not abort execution. 2025-06-01 22:27:02.185336 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 22:27:02.186140 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 22:27:02.187847 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 22:27:02.188308 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 22:27:02.189032 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 22:27:02.189694 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 22:27:02.190742 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 22:27:02.191033 | orchestrator | 2025-06-01 22:27:02.191449 | orchestrator | 2025-06-01 22:27:02.192304 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 22:27:02.192804 | orchestrator | Sunday 01 June 2025 22:27:02 +0000 (0:00:03.853) 0:00:36.524 *********** 2025-06-01 22:27:02.193690 | orchestrator | =============================================================================== 2025-06-01 22:27:02.193823 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.73s 2025-06-01 22:27:02.194582 | orchestrator | Install required packages (Debian) -------------------------------------- 7.00s 2025-06-01 22:27:02.194717 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.85s 2025-06-01 22:27:02.195455 | orchestrator | Copy fact files --------------------------------------------------------- 3.55s 2025-06-01 22:27:02.195932 | orchestrator | Create custom facts directory ------------------------------------------- 1.54s 2025-06-01 22:27:02.195951 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.24s 2025-06-01 22:27:02.196370 | orchestrator | Copy fact file ---------------------------------------------------------- 1.19s 2025-06-01 22:27:02.196807 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.07s 2025-06-01 22:27:02.197289 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.04s 2025-06-01 22:27:02.197670 | orchestrator | osism.commons.repository : Remove sources.list 
file --------------------- 0.45s 2025-06-01 22:27:02.197877 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.44s 2025-06-01 22:27:02.198100 | orchestrator | Create custom facts directory ------------------------------------------- 0.41s 2025-06-01 22:27:02.198544 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.20s 2025-06-01 22:27:02.199398 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.19s 2025-06-01 22:27:02.200076 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.16s 2025-06-01 22:27:02.200593 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.13s 2025-06-01 22:27:02.200614 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s 2025-06-01 22:27:02.200804 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s 2025-06-01 22:27:02.676296 | orchestrator | + osism apply bootstrap 2025-06-01 22:27:04.319121 | orchestrator | Registering Redlock._acquired_script 2025-06-01 22:27:04.319220 | orchestrator | Registering Redlock._extend_script 2025-06-01 22:27:04.319234 | orchestrator | Registering Redlock._release_script 2025-06-01 22:27:04.391778 | orchestrator | 2025-06-01 22:27:04 | INFO  | Task 14107ae5-002f-4ae4-a8f6-bc5565b7f164 (bootstrap) was prepared for execution. 2025-06-01 22:27:04.391899 | orchestrator | 2025-06-01 22:27:04 | INFO  | It takes a moment until task 14107ae5-002f-4ae4-a8f6-bc5565b7f164 (bootstrap) has been started and output is visible here. 2025-06-01 22:27:08.647526 | orchestrator | 2025-06-01 22:27:08.650249 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-06-01 22:27:08.651503 | orchestrator | 2025-06-01 22:27:08.652868 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-06-01 22:27:08.654473 | orchestrator | Sunday 01 June 2025 22:27:08 +0000 (0:00:00.166) 0:00:00.166 *********** 2025-06-01 22:27:08.718702 | orchestrator | ok: [testbed-manager] 2025-06-01 22:27:08.743572 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:27:08.775960 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:27:08.802325 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:27:08.883538 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:27:08.884457 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:27:08.885265 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:27:08.886231 | orchestrator | 2025-06-01 22:27:08.886940 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-01 22:27:08.887877 | orchestrator | 2025-06-01 22:27:08.888332 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-01 22:27:08.889204 | orchestrator | Sunday 01 June 2025 22:27:08 +0000 (0:00:00.241) 0:00:00.408 *********** 2025-06-01 22:27:12.471689 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:27:12.473387 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:27:12.474414 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:27:12.475711 | orchestrator | ok: [testbed-manager] 2025-06-01 22:27:12.476629 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:27:12.477754 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:27:12.478511 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:27:12.479319 | 
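"Group hosts based on state bootstrap" is the usual group_by pattern: each host is sorted into a dynamic group derived from a host variable, so later plays can address only hosts in a given state. A sketch under that assumption (the variable name is hypothetical):

- name: Group hosts based on state bootstrap
  hosts: all
  tasks:
    - name: Group hosts based on state bootstrap
      ansible.builtin.group_by:
        key: "bootstrap_{{ bootstrap_state | default('pending') }}"   # hypothetical variable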
orchestrator | 2025-06-01 22:27:12.480319 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-06-01 22:27:12.481048 | orchestrator | 2025-06-01 22:27:12.481973 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-01 22:27:12.482940 | orchestrator | Sunday 01 June 2025 22:27:12 +0000 (0:00:03.586) 0:00:03.994 *********** 2025-06-01 22:27:12.570198 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-06-01 22:27:12.570319 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-06-01 22:27:12.570414 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-06-01 22:27:12.617253 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-01 22:27:12.617934 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-06-01 22:27:12.618289 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-01 22:27:12.618737 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-06-01 22:27:12.619150 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-01 22:27:12.622685 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-01 22:27:12.622723 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-06-01 22:27:12.665769 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-01 22:27:12.666109 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-06-01 22:27:12.666491 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-06-01 22:27:12.666856 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-06-01 22:27:12.670919 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-06-01 22:27:12.671119 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-01 22:27:12.898824 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:27:12.902742 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-06-01 22:27:12.904568 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-06-01 22:27:12.905283 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-06-01 22:27:12.906458 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-01 22:27:12.907826 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:27:12.908303 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-06-01 22:27:12.910125 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-06-01 22:27:12.911149 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-06-01 22:27:12.912023 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-06-01 22:27:12.912752 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-06-01 22:27:12.914009 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-06-01 22:27:12.914715 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-06-01 22:27:12.915135 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-06-01 22:27:12.915735 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-06-01 22:27:12.916375 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-06-01 22:27:12.918118 | orchestrator | skipping: 
[testbed-node-5] => (item=testbed-node-1)  2025-06-01 22:27:12.918826 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-06-01 22:27:12.919086 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-06-01 22:27:12.919689 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-06-01 22:27:12.920174 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:27:12.920896 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-06-01 22:27:12.921186 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:27:12.925284 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-06-01 22:27:12.925311 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-06-01 22:27:12.925323 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-06-01 22:27:12.925335 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-06-01 22:27:12.925346 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-06-01 22:27:12.926210 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-01 22:27:12.926237 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-06-01 22:27:12.926610 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-06-01 22:27:12.927157 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-01 22:27:12.927440 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-06-01 22:27:12.928281 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-06-01 22:27:12.928386 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-01 22:27:12.928869 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:27:12.929258 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-06-01 22:27:12.929911 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-06-01 22:27:12.930098 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:27:12.930721 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:27:12.930941 | orchestrator | 2025-06-01 22:27:12.931336 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-06-01 22:27:12.932151 | orchestrator | 2025-06-01 22:27:12.932896 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-06-01 22:27:12.933786 | orchestrator | Sunday 01 June 2025 22:27:12 +0000 (0:00:00.426) 0:00:04.420 *********** 2025-06-01 22:27:14.156833 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:27:14.157823 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:27:14.159971 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:27:14.160857 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:27:14.161508 | orchestrator | ok: [testbed-manager] 2025-06-01 22:27:14.162209 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:27:14.163149 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:27:14.163639 | orchestrator | 2025-06-01 22:27:14.164170 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-06-01 22:27:14.165082 | orchestrator | Sunday 01 June 2025 22:27:14 +0000 (0:00:01.259) 0:00:05.680 *********** 2025-06-01 22:27:15.378489 | orchestrator | ok: [testbed-manager] 2025-06-01 22:27:15.379305 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:27:15.380158 | orchestrator | ok: [testbed-node-0] 2025-06-01 
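The "(if using --limit)" play above exists to backfill facts for hosts that a --limit run would exclude: every host loops over its peers and delegates a setup run wherever facts are still missing. Here every item is skipped because facts were already gathered. Roughly, with the loop source and condition as assumptions:

- name: Gathers facts about hosts
  ansible.builtin.setup:
  delegate_to: "{{ item }}"
  delegate_facts: true
  loop: "{{ groups['all'] }}"                        # assumed loop source
  when: hostvars[item].ansible_facts | length == 0   # skip hosts that already have facts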
22:27:15.383321 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:27:15.383906 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:27:15.385173 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:27:15.386673 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:27:15.387940 | orchestrator | 2025-06-01 22:27:15.388778 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-06-01 22:27:15.389490 | orchestrator | Sunday 01 June 2025 22:27:15 +0000 (0:00:01.217) 0:00:06.898 *********** 2025-06-01 22:27:15.652362 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:27:15.652791 | orchestrator | 2025-06-01 22:27:15.653770 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-06-01 22:27:15.654505 | orchestrator | Sunday 01 June 2025 22:27:15 +0000 (0:00:00.276) 0:00:07.175 *********** 2025-06-01 22:27:17.589508 | orchestrator | changed: [testbed-manager] 2025-06-01 22:27:17.590759 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:27:17.591177 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:27:17.592997 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:27:17.594500 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:27:17.595192 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:27:17.596117 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:27:17.597135 | orchestrator | 2025-06-01 22:27:17.597841 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-06-01 22:27:17.598634 | orchestrator | Sunday 01 June 2025 22:27:17 +0000 (0:00:01.935) 0:00:09.110 *********** 2025-06-01 22:27:17.661739 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:27:17.897117 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:27:17.897318 | orchestrator | 2025-06-01 22:27:17.898152 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-06-01 22:27:17.899244 | orchestrator | Sunday 01 June 2025 22:27:17 +0000 (0:00:00.308) 0:00:09.419 *********** 2025-06-01 22:27:18.857593 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:27:18.857693 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:27:18.857860 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:27:18.859818 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:27:18.859901 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:27:18.862856 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:27:18.863368 | orchestrator | 2025-06-01 22:27:18.864008 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-06-01 22:27:18.864659 | orchestrator | Sunday 01 June 2025 22:27:18 +0000 (0:00:00.960) 0:00:10.380 *********** 2025-06-01 22:27:18.905414 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:27:19.413746 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:27:19.414468 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:27:19.415227 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:27:19.417933 | orchestrator | changed: [testbed-node-4] 2025-06-01 
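Proxying apt comes down to a drop-in under /etc/apt/apt.conf.d, while the system-wide variant lands in /etc/environment. A sketch with a placeholder proxy URL (in the testbed the manager plausibly fronts the nodes, but the exact host and port are assumptions):

- name: Configure proxy parameters for apt
  ansible.builtin.copy:
    dest: /etc/apt/apt.conf.d/90proxy
    content: |
      Acquire::http::Proxy "http://testbed-manager:3128";
      Acquire::https::Proxy "http://testbed-manager:3128";
    mode: "0644"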
22:27:19.417955 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:27:19.417967 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:27:19.417980 | orchestrator | 2025-06-01 22:27:19.417993 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-06-01 22:27:19.419154 | orchestrator | Sunday 01 June 2025 22:27:19 +0000 (0:00:00.557) 0:00:10.937 *********** 2025-06-01 22:27:19.512707 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:27:19.537888 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:27:19.560497 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:27:19.865310 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:27:19.866518 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:27:19.868932 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:27:19.869977 | orchestrator | ok: [testbed-manager] 2025-06-01 22:27:19.871190 | orchestrator | 2025-06-01 22:27:19.872694 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-06-01 22:27:19.873736 | orchestrator | Sunday 01 June 2025 22:27:19 +0000 (0:00:00.448) 0:00:11.386 *********** 2025-06-01 22:27:19.940230 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:27:19.970326 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:27:19.997621 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:27:20.024852 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:27:20.078616 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:27:20.079487 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:27:20.080540 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:27:20.080738 | orchestrator | 2025-06-01 22:27:20.081184 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-06-01 22:27:20.081635 | orchestrator | Sunday 01 June 2025 22:27:20 +0000 (0:00:00.216) 0:00:11.602 *********** 2025-06-01 22:27:20.384171 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:27:20.384320 | orchestrator | 2025-06-01 22:27:20.384336 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-06-01 22:27:20.384408 | orchestrator | Sunday 01 June 2025 22:27:20 +0000 (0:00:00.303) 0:00:11.905 *********** 2025-06-01 22:27:20.730266 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:27:20.731958 | orchestrator | 2025-06-01 22:27:20.732293 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-06-01 22:27:20.733147 | orchestrator | Sunday 01 June 2025 22:27:20 +0000 (0:00:00.345) 0:00:12.251 *********** 2025-06-01 22:27:21.954130 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:27:21.954675 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:27:21.955329 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:27:21.956372 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:27:21.957206 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:27:21.958117 | orchestrator | ok: [testbed-manager] 2025-06-01 22:27:21.958796 | 
orchestrator | ok: [testbed-node-0] 2025-06-01 22:27:21.959268 | orchestrator | 2025-06-01 22:27:21.960705 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-06-01 22:27:21.960728 | orchestrator | Sunday 01 June 2025 22:27:21 +0000 (0:00:01.225) 0:00:13.476 *********** 2025-06-01 22:27:22.033290 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:27:22.065760 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:27:22.087542 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:27:22.117193 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:27:22.193848 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:27:22.194957 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:27:22.195363 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:27:22.196286 | orchestrator | 2025-06-01 22:27:22.198081 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-06-01 22:27:22.199017 | orchestrator | Sunday 01 June 2025 22:27:22 +0000 (0:00:00.241) 0:00:13.717 *********** 2025-06-01 22:27:22.741205 | orchestrator | ok: [testbed-manager] 2025-06-01 22:27:22.741418 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:27:22.741993 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:27:22.745703 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:27:22.745743 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:27:22.745926 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:27:22.746297 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:27:22.746823 | orchestrator | 2025-06-01 22:27:22.747513 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-06-01 22:27:22.747578 | orchestrator | Sunday 01 June 2025 22:27:22 +0000 (0:00:00.545) 0:00:14.263 *********** 2025-06-01 22:27:22.828523 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:27:22.852196 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:27:22.878962 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:27:22.905927 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:27:22.974278 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:27:22.974391 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:27:22.974748 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:27:22.975063 | orchestrator | 2025-06-01 22:27:22.975473 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-06-01 22:27:22.975807 | orchestrator | Sunday 01 June 2025 22:27:22 +0000 (0:00:00.235) 0:00:14.498 *********** 2025-06-01 22:27:23.550870 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:27:23.552019 | orchestrator | ok: [testbed-manager] 2025-06-01 22:27:23.553894 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:27:23.553978 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:27:23.556847 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:27:23.557211 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:27:23.557688 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:27:23.558150 | orchestrator | 2025-06-01 22:27:23.558821 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-06-01 22:27:23.559053 | orchestrator | Sunday 01 June 2025 22:27:23 +0000 (0:00:00.576) 0:00:15.075 *********** 2025-06-01 22:27:24.614275 | orchestrator | ok: [testbed-manager] 2025-06-01 22:27:24.615337 | orchestrator | changed: 
[testbed-node-3] 2025-06-01 22:27:24.616704 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:27:24.616728 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:27:24.617318 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:27:24.617693 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:27:24.618521 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:27:24.619444 | orchestrator | 2025-06-01 22:27:24.619900 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-06-01 22:27:24.620749 | orchestrator | Sunday 01 June 2025 22:27:24 +0000 (0:00:01.061) 0:00:16.136 *********** 2025-06-01 22:27:25.714620 | orchestrator | ok: [testbed-manager] 2025-06-01 22:27:25.714721 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:27:25.714735 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:27:25.714803 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:27:25.715010 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:27:25.715513 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:27:25.715703 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:27:25.715884 | orchestrator | 2025-06-01 22:27:25.716976 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-06-01 22:27:25.717139 | orchestrator | Sunday 01 June 2025 22:27:25 +0000 (0:00:01.100) 0:00:17.237 *********** 2025-06-01 22:27:26.128588 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:27:26.128724 | orchestrator | 2025-06-01 22:27:26.131905 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-06-01 22:27:26.131954 | orchestrator | Sunday 01 June 2025 22:27:26 +0000 (0:00:00.413) 0:00:17.650 *********** 2025-06-01 22:27:26.204727 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:27:27.352237 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:27:27.353170 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:27:27.354473 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:27:27.355249 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:27:27.356148 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:27:27.357050 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:27:27.357760 | orchestrator | 2025-06-01 22:27:27.358212 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-06-01 22:27:27.358915 | orchestrator | Sunday 01 June 2025 22:27:27 +0000 (0:00:01.223) 0:00:18.874 *********** 2025-06-01 22:27:27.431966 | orchestrator | ok: [testbed-manager] 2025-06-01 22:27:27.460018 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:27:27.486363 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:27:27.514464 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:27:27.583197 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:27:27.586188 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:27:27.587824 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:27:27.588918 | orchestrator | 2025-06-01 22:27:27.590002 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-06-01 22:27:27.591147 | orchestrator | Sunday 01 June 2025 22:27:27 +0000 (0:00:00.232) 0:00:19.106 *********** 2025-06-01 22:27:27.703232 | orchestrator | ok: 
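The resolvconf steps above switch the hosts to systemd-resolved: competing packages are removed, /etc/resolv.conf becomes a symlink to the stub resolver, and the service is enabled. The two central tasks, sketched:

- name: Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf
  ansible.builtin.file:
    src: /run/systemd/resolve/stub-resolv.conf
    dest: /etc/resolv.conf
    state: link
    force: true   # replace the existing regular file

- name: Start/enable systemd-resolved service
  ansible.builtin.systemd:
    name: systemd-resolved
    state: started
    enabled: true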
[testbed-manager] 2025-06-01 22:27:27.731798 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:27:27.768480 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:27:27.852495 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:27:27.853891 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:27:27.856149 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:27:27.857188 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:27:27.858966 | orchestrator | 2025-06-01 22:27:27.860098 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-06-01 22:27:27.861326 | orchestrator | Sunday 01 June 2025 22:27:27 +0000 (0:00:00.269) 0:00:19.375 *********** 2025-06-01 22:27:27.943778 | orchestrator | ok: [testbed-manager] 2025-06-01 22:27:27.986314 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:27:28.017721 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:27:28.043249 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:27:28.105335 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:27:28.105882 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:27:28.106903 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:27:28.106926 | orchestrator | 2025-06-01 22:27:28.108042 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-06-01 22:27:28.108827 | orchestrator | Sunday 01 June 2025 22:27:28 +0000 (0:00:00.254) 0:00:19.629 *********** 2025-06-01 22:27:28.399414 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:27:28.400454 | orchestrator | 2025-06-01 22:27:28.400494 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-06-01 22:27:28.401774 | orchestrator | Sunday 01 June 2025 22:27:28 +0000 (0:00:00.293) 0:00:19.923 *********** 2025-06-01 22:27:28.932225 | orchestrator | ok: [testbed-manager] 2025-06-01 22:27:28.932414 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:27:28.933172 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:27:28.934787 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:27:28.935804 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:27:28.937021 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:27:28.938226 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:27:28.939086 | orchestrator | 2025-06-01 22:27:28.940023 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-06-01 22:27:28.940720 | orchestrator | Sunday 01 June 2025 22:27:28 +0000 (0:00:00.529) 0:00:20.453 *********** 2025-06-01 22:27:29.025344 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:27:29.051968 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:27:29.075411 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:27:29.100570 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:27:29.185329 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:27:29.185865 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:27:29.186343 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:27:29.187080 | orchestrator | 2025-06-01 22:27:29.189969 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-06-01 22:27:29.189994 | orchestrator | Sunday 01 June 2025 22:27:29 +0000 (0:00:00.255) 0:00:20.708 *********** 2025-06-01 22:27:30.294250 | 
orchestrator | ok: [testbed-manager] 2025-06-01 22:27:30.295105 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:27:30.296610 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:27:30.298242 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:27:30.299569 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:27:30.300947 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:27:30.302382 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:27:30.303895 | orchestrator | 2025-06-01 22:27:30.305496 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-06-01 22:27:30.305537 | orchestrator | Sunday 01 June 2025 22:27:30 +0000 (0:00:01.107) 0:00:21.816 *********** 2025-06-01 22:27:30.872195 | orchestrator | ok: [testbed-manager] 2025-06-01 22:27:30.872950 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:27:30.875389 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:27:30.876138 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:27:30.876847 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:27:30.877503 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:27:30.878061 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:27:30.878838 | orchestrator | 2025-06-01 22:27:30.879295 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-01 22:27:30.880025 | orchestrator | Sunday 01 June 2025 22:27:30 +0000 (0:00:00.578) 0:00:22.394 *********** 2025-06-01 22:27:31.982172 | orchestrator | ok: [testbed-manager] 2025-06-01 22:27:31.983223 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:27:31.983552 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:27:31.985195 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:27:31.986640 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:27:31.987303 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:27:31.988666 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:27:31.989672 | orchestrator | 2025-06-01 22:27:31.991016 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-01 22:27:31.991651 | orchestrator | Sunday 01 June 2025 22:27:31 +0000 (0:00:01.107) 0:00:23.502 *********** 2025-06-01 22:27:45.063217 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:27:45.063338 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:27:45.063354 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:27:45.063366 | orchestrator | changed: [testbed-manager] 2025-06-01 22:27:45.063379 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:27:45.063812 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:27:45.065537 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:27:45.067134 | orchestrator | 2025-06-01 22:27:45.068012 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-06-01 22:27:45.068964 | orchestrator | Sunday 01 June 2025 22:27:45 +0000 (0:00:13.078) 0:00:36.581 *********** 2025-06-01 22:27:45.134165 | orchestrator | ok: [testbed-manager] 2025-06-01 22:27:45.160790 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:27:45.190341 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:27:45.218818 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:27:45.289949 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:27:45.290471 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:27:45.292942 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:27:45.293218 | orchestrator | 2025-06-01 22:27:45.295135 | orchestrator | TASK 
[osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-06-01 22:27:45.295160 | orchestrator | Sunday 01 June 2025 22:27:45 +0000 (0:00:00.232) 0:00:36.813 *********** 2025-06-01 22:27:45.390198 | orchestrator | ok: [testbed-manager] 2025-06-01 22:27:45.415125 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:27:45.454904 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:27:45.530250 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:27:45.530747 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:27:45.532306 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:27:45.533718 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:27:45.534217 | orchestrator | 2025-06-01 22:27:45.535418 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-06-01 22:27:45.535781 | orchestrator | Sunday 01 June 2025 22:27:45 +0000 (0:00:00.240) 0:00:37.054 *********** 2025-06-01 22:27:45.610214 | orchestrator | ok: [testbed-manager] 2025-06-01 22:27:45.642195 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:27:45.666762 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:27:45.691731 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:27:45.763118 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:27:45.763555 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:27:45.763770 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:27:45.764691 | orchestrator | 2025-06-01 22:27:45.764721 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-06-01 22:27:45.765012 | orchestrator | Sunday 01 June 2025 22:27:45 +0000 (0:00:00.233) 0:00:37.287 *********** 2025-06-01 22:27:46.059908 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:27:46.060325 | orchestrator | 2025-06-01 22:27:46.064171 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-06-01 22:27:46.064218 | orchestrator | Sunday 01 June 2025 22:27:46 +0000 (0:00:00.295) 0:00:37.582 *********** 2025-06-01 22:27:47.678807 | orchestrator | ok: [testbed-manager] 2025-06-01 22:27:47.679711 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:27:47.682900 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:27:47.682933 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:27:47.682946 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:27:47.683998 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:27:47.685186 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:27:47.685980 | orchestrator | 2025-06-01 22:27:47.687447 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-06-01 22:27:47.688213 | orchestrator | Sunday 01 June 2025 22:27:47 +0000 (0:00:01.618) 0:00:39.201 *********** 2025-06-01 22:27:48.739770 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:27:48.740340 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:27:48.742112 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:27:48.742211 | orchestrator | changed: [testbed-manager] 2025-06-01 22:27:48.744300 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:27:48.744552 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:27:48.745502 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:27:48.746852 | orchestrator | 2025-06-01 22:27:48.747757 | 
orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-06-01 22:27:48.748076 | orchestrator | Sunday 01 June 2025 22:27:48 +0000 (0:00:01.060) 0:00:40.262 *********** 2025-06-01 22:27:49.545456 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:27:49.549925 | orchestrator | ok: [testbed-manager] 2025-06-01 22:27:49.549956 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:27:49.550217 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:27:49.550237 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:27:49.550249 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:27:49.550989 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:27:49.551346 | orchestrator | 2025-06-01 22:27:49.552063 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-06-01 22:27:49.552699 | orchestrator | Sunday 01 June 2025 22:27:49 +0000 (0:00:00.804) 0:00:41.067 *********** 2025-06-01 22:27:49.870843 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:27:49.872070 | orchestrator | 2025-06-01 22:27:49.873366 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-06-01 22:27:49.875070 | orchestrator | Sunday 01 June 2025 22:27:49 +0000 (0:00:00.327) 0:00:41.394 *********** 2025-06-01 22:27:50.875192 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:27:50.883827 | orchestrator | changed: [testbed-manager] 2025-06-01 22:27:50.883885 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:27:50.888063 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:27:50.888148 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:27:50.888739 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:27:50.888774 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:27:50.889125 | orchestrator | 2025-06-01 22:27:50.889783 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-06-01 22:27:50.891227 | orchestrator | Sunday 01 June 2025 22:27:50 +0000 (0:00:01.002) 0:00:42.397 *********** 2025-06-01 22:27:50.980593 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:27:51.011658 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:27:51.056646 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:27:51.203102 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:27:51.204615 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:27:51.208012 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:27:51.208066 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:27:51.208079 | orchestrator | 2025-06-01 22:27:51.208545 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-06-01 22:27:51.210519 | orchestrator | Sunday 01 June 2025 22:27:51 +0000 (0:00:00.328) 0:00:42.726 *********** 2025-06-01 22:28:03.473551 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:28:03.473673 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:28:03.473690 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:28:03.473761 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:28:03.473968 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:28:03.476203 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:28:03.477516 | orchestrator | changed: 
[testbed-manager] 2025-06-01 22:28:03.478611 | orchestrator | 2025-06-01 22:28:03.479243 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-06-01 22:28:03.480308 | orchestrator | Sunday 01 June 2025 22:28:03 +0000 (0:00:12.264) 0:00:54.991 *********** 2025-06-01 22:28:04.666666 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:28:04.667570 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:28:04.670219 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:28:04.670710 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:28:04.672131 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:28:04.672279 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:28:04.672805 | orchestrator | ok: [testbed-manager] 2025-06-01 22:28:04.673245 | orchestrator | 2025-06-01 22:28:04.673879 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-06-01 22:28:04.675509 | orchestrator | Sunday 01 June 2025 22:28:04 +0000 (0:00:01.193) 0:00:56.185 *********** 2025-06-01 22:28:05.562647 | orchestrator | ok: [testbed-manager] 2025-06-01 22:28:05.563649 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:28:05.565017 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:28:05.566086 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:28:05.567172 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:28:05.568397 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:28:05.569113 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:28:05.570618 | orchestrator | 2025-06-01 22:28:05.571317 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-06-01 22:28:05.572190 | orchestrator | Sunday 01 June 2025 22:28:05 +0000 (0:00:00.898) 0:00:57.083 *********** 2025-06-01 22:28:05.645032 | orchestrator | ok: [testbed-manager] 2025-06-01 22:28:05.669461 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:28:05.697601 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:28:05.728459 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:28:05.787498 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:28:05.788884 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:28:05.790081 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:28:05.790108 | orchestrator | 2025-06-01 22:28:05.790767 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-06-01 22:28:05.791443 | orchestrator | Sunday 01 June 2025 22:28:05 +0000 (0:00:00.227) 0:00:57.311 *********** 2025-06-01 22:28:05.869082 | orchestrator | ok: [testbed-manager] 2025-06-01 22:28:05.899129 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:28:05.926205 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:28:05.954202 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:28:06.015609 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:28:06.016161 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:28:06.017154 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:28:06.017948 | orchestrator | 2025-06-01 22:28:06.019866 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-06-01 22:28:06.020633 | orchestrator | Sunday 01 June 2025 22:28:06 +0000 (0:00:00.227) 0:00:57.539 *********** 2025-06-01 22:28:06.349150 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, 
testbed-node-2 2025-06-01 22:28:06.350004 | orchestrator | 2025-06-01 22:28:06.352925 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-06-01 22:28:06.352960 | orchestrator | Sunday 01 June 2025 22:28:06 +0000 (0:00:00.333) 0:00:57.872 *********** 2025-06-01 22:28:08.040632 | orchestrator | ok: [testbed-manager] 2025-06-01 22:28:08.041155 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:28:08.042062 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:28:08.043155 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:28:08.045599 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:28:08.045817 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:28:08.046708 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:28:08.047166 | orchestrator | 2025-06-01 22:28:08.048250 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-06-01 22:28:08.048660 | orchestrator | Sunday 01 June 2025 22:28:08 +0000 (0:00:01.689) 0:00:59.561 *********** 2025-06-01 22:28:08.677974 | orchestrator | changed: [testbed-manager] 2025-06-01 22:28:08.678209 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:28:08.679587 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:28:08.682518 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:28:08.684810 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:28:08.684835 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:28:08.684847 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:28:08.685588 | orchestrator | 2025-06-01 22:28:08.686260 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-06-01 22:28:08.687285 | orchestrator | Sunday 01 June 2025 22:28:08 +0000 (0:00:00.639) 0:01:00.201 *********** 2025-06-01 22:28:08.770683 | orchestrator | ok: [testbed-manager] 2025-06-01 22:28:08.809972 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:28:08.840260 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:28:08.878014 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:28:08.951141 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:28:08.951206 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:28:08.951798 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:28:08.953652 | orchestrator | 2025-06-01 22:28:08.955823 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-06-01 22:28:08.956901 | orchestrator | Sunday 01 June 2025 22:28:08 +0000 (0:00:00.272) 0:01:00.474 *********** 2025-06-01 22:28:10.001316 | orchestrator | ok: [testbed-manager] 2025-06-01 22:28:10.002643 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:28:10.004705 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:28:10.005538 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:28:10.007211 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:28:10.009282 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:28:10.010163 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:28:10.011652 | orchestrator | 2025-06-01 22:28:10.013112 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-06-01 22:28:10.013745 | orchestrator | Sunday 01 June 2025 22:28:09 +0000 (0:00:01.048) 0:01:01.522 *********** 2025-06-01 22:28:11.450433 | orchestrator | changed: [testbed-manager] 2025-06-01 22:28:11.450703 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:28:11.452841 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:28:11.452902 | 
orchestrator | changed: [testbed-node-5] 2025-06-01 22:28:11.453663 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:28:11.455346 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:28:11.456075 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:28:11.456759 | orchestrator | 2025-06-01 22:28:11.457269 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-06-01 22:28:11.458152 | orchestrator | Sunday 01 June 2025 22:28:11 +0000 (0:00:01.449) 0:01:02.972 *********** 2025-06-01 22:28:13.655061 | orchestrator | ok: [testbed-manager] 2025-06-01 22:28:13.655263 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:28:13.657146 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:28:13.658853 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:28:13.660025 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:28:13.660742 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:28:13.661705 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:28:13.662754 | orchestrator | 2025-06-01 22:28:13.663790 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-06-01 22:28:13.664437 | orchestrator | Sunday 01 June 2025 22:28:13 +0000 (0:00:02.204) 0:01:05.177 *********** 2025-06-01 22:28:50.070435 | orchestrator | ok: [testbed-manager] 2025-06-01 22:28:50.070726 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:28:50.070787 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:28:50.071905 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:28:50.072838 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:28:50.074775 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:28:50.075854 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:28:50.077209 | orchestrator | 2025-06-01 22:28:50.077485 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-06-01 22:28:50.078474 | orchestrator | Sunday 01 June 2025 22:28:50 +0000 (0:00:36.414) 0:01:41.591 *********** 2025-06-01 22:30:04.399112 | orchestrator | changed: [testbed-manager] 2025-06-01 22:30:04.399263 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:30:04.399346 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:30:04.401151 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:30:04.402637 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:30:04.403888 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:30:04.405599 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:30:04.407152 | orchestrator | 2025-06-01 22:30:04.408654 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-06-01 22:30:04.409601 | orchestrator | Sunday 01 June 2025 22:30:04 +0000 (0:01:14.327) 0:02:55.918 *********** 2025-06-01 22:30:05.968360 | orchestrator | ok: [testbed-manager] 2025-06-01 22:30:05.969719 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:30:05.970792 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:30:05.971508 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:30:05.972545 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:30:05.972705 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:30:05.973393 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:30:05.973813 | orchestrator | 2025-06-01 22:30:05.975089 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-06-01 22:30:05.975401 | orchestrator | Sunday 01 June 2025 22:30:05 +0000 (0:00:01.572) 0:02:57.490 *********** 2025-06-01 
22:30:18.982920 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:30:18.983221 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:30:18.983247 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:30:18.983259 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:30:18.983770 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:30:18.985628 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:30:18.986076 | orchestrator | changed: [testbed-manager] 2025-06-01 22:30:18.986786 | orchestrator | 2025-06-01 22:30:18.987715 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-06-01 22:30:18.988289 | orchestrator | Sunday 01 June 2025 22:30:18 +0000 (0:00:13.012) 0:03:10.503 *********** 2025-06-01 22:30:19.388393 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-06-01 22:30:19.389711 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-06-01 22:30:19.390627 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-06-01 22:30:19.391368 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-06-01 22:30:19.392369 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-06-01 22:30:19.393423 | orchestrator | 2025-06-01 22:30:19.394269 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-06-01 22:30:19.394960 | orchestrator | Sunday 01 June 2025 22:30:19 +0000 (0:00:00.408) 0:03:10.911 *********** 2025-06-01 22:30:19.456149 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-01 22:30:19.499795 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:30:19.501181 | orchestrator | skipping: [testbed-node-3] => 
(item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-01 22:30:19.533271 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:30:19.534461 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-01 22:30:19.573165 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:30:19.573226 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-01 22:30:19.599406 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:30:21.111405 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-01 22:30:21.111895 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-01 22:30:21.113635 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-01 22:30:21.114506 | orchestrator | 2025-06-01 22:30:21.115061 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-06-01 22:30:21.116785 | orchestrator | Sunday 01 June 2025 22:30:21 +0000 (0:00:01.721) 0:03:12.633 *********** 2025-06-01 22:30:21.185735 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-01 22:30:21.185817 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-01 22:30:21.185830 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-01 22:30:21.186287 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-01 22:30:21.186497 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-01 22:30:21.188353 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-01 22:30:21.188964 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-01 22:30:21.189645 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-01 22:30:21.191098 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-01 22:30:21.191804 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-01 22:30:21.194512 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-01 22:30:21.235831 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-01 22:30:21.237235 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-01 22:30:21.238876 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-01 22:30:21.239719 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-01 22:30:21.240040 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-01 22:30:21.240900 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-01 22:30:21.241432 | orchestrator | skipping: [testbed-node-3] => (item={'name': 
'net.core.somaxconn', 'value': 4096})  2025-06-01 22:30:21.241918 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-01 22:30:21.242804 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-01 22:30:21.243397 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-01 22:30:21.243618 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-01 22:30:21.244968 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-01 22:30:21.245040 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-01 22:30:21.276373 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-01 22:30:21.277431 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:30:21.278588 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-01 22:30:21.279836 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-01 22:30:21.280467 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-01 22:30:21.280774 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-01 22:30:21.281604 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-01 22:30:21.281753 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-01 22:30:21.282161 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-01 22:30:21.328256 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:30:21.329259 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-01 22:30:21.330859 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-01 22:30:21.332508 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-01 22:30:21.333204 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-01 22:30:21.333980 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-01 22:30:21.334450 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-01 22:30:21.335158 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-01 22:30:21.335400 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-01 22:30:21.355730 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:30:25.777132 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:30:25.777911 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-06-01 22:30:25.779845 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-06-01 22:30:25.780683 | orchestrator | changed: 
[testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-06-01 22:30:25.782160 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-06-01 22:30:25.782289 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-06-01 22:30:25.783209 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-06-01 22:30:25.783836 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-06-01 22:30:25.784182 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-06-01 22:30:25.785068 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-06-01 22:30:25.785849 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-06-01 22:30:25.786205 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-06-01 22:30:25.786785 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-06-01 22:30:25.787580 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-06-01 22:30:25.787830 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-06-01 22:30:25.789072 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-06-01 22:30:25.789651 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-06-01 22:30:25.790383 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-06-01 22:30:25.790498 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-06-01 22:30:25.790957 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-06-01 22:30:25.791535 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-06-01 22:30:25.791758 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-06-01 22:30:25.792263 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-06-01 22:30:25.792730 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-06-01 22:30:25.793400 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-06-01 22:30:25.794143 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-06-01 22:30:25.794934 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-06-01 22:30:25.795346 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-06-01 22:30:25.795777 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-06-01 22:30:25.796361 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-06-01 22:30:25.796748 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-06-01 22:30:25.797151 | orchestrator | 2025-06-01 22:30:25.797792 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-06-01 22:30:25.798131 | orchestrator | Sunday 01 June 2025 22:30:25 +0000 (0:00:04.664) 0:03:17.297 *********** 2025-06-01 22:30:26.364647 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-01 22:30:26.365829 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-01 22:30:26.365932 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-01 22:30:26.368349 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-01 22:30:26.369801 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-01 22:30:26.372637 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-01 22:30:26.372691 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-01 22:30:26.373637 | orchestrator | 2025-06-01 22:30:26.375013 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-06-01 22:30:26.375969 | orchestrator | Sunday 01 June 2025 22:30:26 +0000 (0:00:00.590) 0:03:17.887 *********** 2025-06-01 22:30:26.433019 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-01 22:30:26.463015 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:30:26.548876 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-01 22:30:26.865199 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-01 22:30:26.865852 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:30:26.867015 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:30:26.868260 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-01 22:30:26.869272 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:30:26.869929 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-06-01 22:30:26.870955 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-06-01 22:30:26.871443 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-06-01 22:30:26.872298 | orchestrator | 2025-06-01 22:30:26.872997 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-06-01 22:30:26.873434 | orchestrator | Sunday 01 June 2025 22:30:26 +0000 (0:00:00.500) 0:03:18.387 *********** 2025-06-01 22:30:26.922189 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-06-01 22:30:26.952615 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:30:27.041625 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-06-01 22:30:27.041782 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-06-01 22:30:27.438944 | orchestrator | 
skipping: [testbed-node-0] 2025-06-01 22:30:27.440433 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:30:27.441638 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-06-01 22:30:27.443319 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:30:27.444281 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-06-01 22:30:27.444869 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-06-01 22:30:27.445866 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-06-01 22:30:27.446597 | orchestrator | 2025-06-01 22:30:27.447256 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-06-01 22:30:27.448581 | orchestrator | Sunday 01 June 2025 22:30:27 +0000 (0:00:00.574) 0:03:18.962 *********** 2025-06-01 22:30:27.545965 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:30:27.572491 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:30:27.600684 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:30:27.628512 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:30:27.770274 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:30:27.770895 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:30:27.775022 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:30:27.776235 | orchestrator | 2025-06-01 22:30:27.777363 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-06-01 22:30:27.778538 | orchestrator | Sunday 01 June 2025 22:30:27 +0000 (0:00:00.331) 0:03:19.293 *********** 2025-06-01 22:30:33.349488 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:30:33.349696 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:30:33.353228 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:30:33.353262 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:30:33.355229 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:30:33.355323 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:30:33.355338 | orchestrator | ok: [testbed-manager] 2025-06-01 22:30:33.355351 | orchestrator | 2025-06-01 22:30:33.355364 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-06-01 22:30:33.355945 | orchestrator | Sunday 01 June 2025 22:30:33 +0000 (0:00:05.578) 0:03:24.872 *********** 2025-06-01 22:30:33.446330 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-06-01 22:30:33.447420 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-06-01 22:30:33.492808 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:30:33.493450 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-06-01 22:30:33.533038 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:30:33.533819 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-06-01 22:30:33.598751 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:30:33.600231 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-06-01 22:30:33.643319 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:30:33.644516 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-06-01 22:30:33.729213 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:30:33.729526 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:30:33.730288 | orchestrator | skipping: [testbed-node-2] => 
(item=nscd)  2025-06-01 22:30:33.731720 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:30:33.733541 | orchestrator | 2025-06-01 22:30:33.734766 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-06-01 22:30:33.735857 | orchestrator | Sunday 01 June 2025 22:30:33 +0000 (0:00:00.381) 0:03:25.253 *********** 2025-06-01 22:30:34.800845 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-06-01 22:30:34.801297 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-06-01 22:30:34.802070 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-06-01 22:30:34.804495 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-06-01 22:30:34.804567 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-06-01 22:30:34.805832 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-06-01 22:30:34.806543 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-06-01 22:30:34.807157 | orchestrator | 2025-06-01 22:30:34.807726 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-06-01 22:30:34.810499 | orchestrator | Sunday 01 June 2025 22:30:34 +0000 (0:00:01.069) 0:03:26.322 *********** 2025-06-01 22:30:35.395456 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:30:35.399992 | orchestrator | 2025-06-01 22:30:35.400923 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-06-01 22:30:35.401766 | orchestrator | Sunday 01 June 2025 22:30:35 +0000 (0:00:00.595) 0:03:26.917 *********** 2025-06-01 22:30:36.528520 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:30:36.528678 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:30:36.529187 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:30:36.529429 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:30:36.529871 | orchestrator | ok: [testbed-manager] 2025-06-01 22:30:36.530250 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:30:36.530486 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:30:36.530819 | orchestrator | 2025-06-01 22:30:36.531329 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-06-01 22:30:36.531769 | orchestrator | Sunday 01 June 2025 22:30:36 +0000 (0:00:01.133) 0:03:28.051 *********** 2025-06-01 22:30:37.183612 | orchestrator | ok: [testbed-manager] 2025-06-01 22:30:37.185729 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:30:37.188873 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:30:37.189742 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:30:37.192496 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:30:37.197312 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:30:37.197424 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:30:37.202712 | orchestrator | 2025-06-01 22:30:37.203205 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-06-01 22:30:37.205288 | orchestrator | Sunday 01 June 2025 22:30:37 +0000 (0:00:00.652) 0:03:28.703 *********** 2025-06-01 22:30:37.923563 | orchestrator | changed: [testbed-manager] 2025-06-01 22:30:37.924259 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:30:37.924395 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:30:37.925146 | orchestrator | changed: [testbed-node-5] 
2025-06-01 22:30:37.925578 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:30:37.926234 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:30:37.928487 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:30:37.928716 | orchestrator | 2025-06-01 22:30:37.929950 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-06-01 22:30:37.931292 | orchestrator | Sunday 01 June 2025 22:30:37 +0000 (0:00:00.739) 0:03:29.443 *********** 2025-06-01 22:30:38.625392 | orchestrator | ok: [testbed-manager] 2025-06-01 22:30:38.627648 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:30:38.627693 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:30:38.627706 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:30:38.628516 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:30:38.629551 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:30:38.630717 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:30:38.631220 | orchestrator | 2025-06-01 22:30:38.631841 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-06-01 22:30:38.632760 | orchestrator | Sunday 01 June 2025 22:30:38 +0000 (0:00:00.702) 0:03:30.146 *********** 2025-06-01 22:30:39.565902 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748815720.3258219, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 22:30:39.566319 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748815779.0358405, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 22:30:39.567464 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748815780.8472157, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 22:30:39.570146 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748815776.1935263, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 22:30:39.570381 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748815776.5076506, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 22:30:39.570851 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748815769.3532493, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 22:30:39.571783 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748815783.1319265, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 22:30:39.572482 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748815752.7566273, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 22:30:39.573574 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748815675.0044231, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 22:30:39.574753 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748815676.8299563, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 22:30:39.575324 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748815680.1721756, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 22:30:39.576413 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748815675.4504125, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 22:30:39.577341 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748815681.3405209, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 22:30:39.577938 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748815669.5775485, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-01 22:30:39.578536 | orchestrator | 2025-06-01 22:30:39.579261 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-06-01 22:30:39.579561 | orchestrator | Sunday 01 June 2025 22:30:39 +0000 (0:00:00.941) 0:03:31.087 *********** 2025-06-01 22:30:40.734102 | orchestrator | changed: [testbed-manager] 2025-06-01 22:30:40.734281 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:30:40.734913 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:30:40.736439 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:30:40.736503 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:30:40.737812 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:30:40.737833 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:30:40.738511 | orchestrator | 2025-06-01 22:30:40.739331 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-06-01 22:30:40.740063 | orchestrator | Sunday 01 June 2025 22:30:40 +0000 (0:00:01.169) 0:03:32.257 *********** 2025-06-01 22:30:41.901325 | orchestrator | changed: 
[testbed-manager] 2025-06-01 22:30:41.901422 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:30:41.902158 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:30:41.903215 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:30:41.906004 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:30:41.907895 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:30:41.909541 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:30:41.911132 | orchestrator | 2025-06-01 22:30:41.912142 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-06-01 22:30:41.913926 | orchestrator | Sunday 01 June 2025 22:30:41 +0000 (0:00:01.164) 0:03:33.421 *********** 2025-06-01 22:30:43.039275 | orchestrator | changed: [testbed-manager] 2025-06-01 22:30:43.040139 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:30:43.044710 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:30:43.044813 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:30:43.044828 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:30:43.044840 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:30:43.044851 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:30:43.045136 | orchestrator | 2025-06-01 22:30:43.046061 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-06-01 22:30:43.046604 | orchestrator | Sunday 01 June 2025 22:30:43 +0000 (0:00:01.140) 0:03:34.562 *********** 2025-06-01 22:30:43.141678 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:30:43.193500 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:30:43.254476 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:30:43.301933 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:30:43.364430 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:30:43.368428 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:30:43.368532 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:30:43.368550 | orchestrator | 2025-06-01 22:30:43.368562 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-06-01 22:30:43.368573 | orchestrator | Sunday 01 June 2025 22:30:43 +0000 (0:00:00.324) 0:03:34.887 *********** 2025-06-01 22:30:44.118121 | orchestrator | ok: [testbed-manager] 2025-06-01 22:30:44.121578 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:30:44.121638 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:30:44.121650 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:30:44.123220 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:30:44.123721 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:30:44.124996 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:30:44.125585 | orchestrator | 2025-06-01 22:30:44.126637 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-06-01 22:30:44.127060 | orchestrator | Sunday 01 June 2025 22:30:44 +0000 (0:00:00.752) 0:03:35.639 *********** 2025-06-01 22:30:44.580687 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:30:44.583733 | orchestrator | 2025-06-01 22:30:44.583765 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-06-01 22:30:44.583779 | orchestrator | Sunday 01 June 2025 22:30:44 +0000 
(0:00:00.462) 0:03:36.102 *********** 2025-06-01 22:30:51.640769 | orchestrator | ok: [testbed-manager] 2025-06-01 22:30:51.640889 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:30:51.642581 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:30:51.643389 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:30:51.643879 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:30:51.644528 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:30:51.645551 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:30:51.646675 | orchestrator | 2025-06-01 22:30:51.647254 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-06-01 22:30:51.647775 | orchestrator | Sunday 01 June 2025 22:30:51 +0000 (0:00:07.057) 0:03:43.160 *********** 2025-06-01 22:30:52.860478 | orchestrator | ok: [testbed-manager] 2025-06-01 22:30:52.861469 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:30:52.862288 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:30:52.863442 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:30:52.863534 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:30:52.864781 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:30:52.865689 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:30:52.866360 | orchestrator | 2025-06-01 22:30:52.867234 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-06-01 22:30:52.867829 | orchestrator | Sunday 01 June 2025 22:30:52 +0000 (0:00:01.222) 0:03:44.382 *********** 2025-06-01 22:30:53.928253 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:30:53.928701 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:30:53.932561 | orchestrator | ok: [testbed-manager] 2025-06-01 22:30:53.932593 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:30:53.932998 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:30:53.934256 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:30:53.934964 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:30:53.936211 | orchestrator | 2025-06-01 22:30:53.937023 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-06-01 22:30:53.937377 | orchestrator | Sunday 01 June 2025 22:30:53 +0000 (0:00:01.068) 0:03:45.450 *********** 2025-06-01 22:30:54.499399 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:30:54.502598 | orchestrator | 2025-06-01 22:30:54.502634 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-06-01 22:30:54.503299 | orchestrator | Sunday 01 June 2025 22:30:54 +0000 (0:00:00.570) 0:03:46.020 *********** 2025-06-01 22:31:03.545262 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:31:03.546002 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:31:03.547211 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:31:03.549277 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:31:03.550168 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:31:03.550712 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:31:03.551957 | orchestrator | changed: [testbed-manager] 2025-06-01 22:31:03.553002 | orchestrator | 2025-06-01 22:31:03.553981 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-06-01 22:31:03.554534 | 
2025-06-01 22:30:53.937023 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-06-01 22:30:53.937377 | orchestrator | Sunday 01 June 2025 22:30:53 +0000 (0:00:01.068) 0:03:45.450 ***********
2025-06-01 22:30:54.499399 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 22:30:54.502598 | orchestrator |
2025-06-01 22:30:54.502634 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-06-01 22:30:54.503299 | orchestrator | Sunday 01 June 2025 22:30:54 +0000 (0:00:00.570) 0:03:46.020 ***********
2025-06-01 22:31:03.545262 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:31:03.546002 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:31:03.547211 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:31:03.549277 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:31:03.550168 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:31:03.550712 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:31:03.551957 | orchestrator | changed: [testbed-manager]
2025-06-01 22:31:03.553002 | orchestrator |
2025-06-01 22:31:03.553981 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-06-01 22:31:03.554534 | orchestrator | Sunday 01 June 2025 22:31:03 +0000 (0:00:09.046) 0:03:55.067 ***********
2025-06-01 22:31:04.165544 | orchestrator | changed: [testbed-manager]
2025-06-01 22:31:04.166444 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:31:04.167731 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:31:04.171403 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:31:04.172831 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:31:04.173088 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:31:04.174416 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:31:04.177198 | orchestrator |
2025-06-01 22:31:04.177748 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-06-01 22:31:04.178466 | orchestrator | Sunday 01 June 2025 22:31:04 +0000 (0:00:00.620) 0:03:55.688 ***********
2025-06-01 22:31:05.324258 | orchestrator | changed: [testbed-manager]
2025-06-01 22:31:05.324364 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:31:05.326072 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:31:05.328751 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:31:05.329610 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:31:05.330996 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:31:05.332672 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:31:05.333503 | orchestrator |
2025-06-01 22:31:05.334914 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-06-01 22:31:05.336190 | orchestrator | Sunday 01 June 2025 22:31:05 +0000 (0:00:01.156) 0:03:56.845 ***********
2025-06-01 22:31:06.375966 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:31:06.377344 | orchestrator | changed: [testbed-manager]
2025-06-01 22:31:06.378426 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:31:06.379338 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:31:06.380411 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:31:06.381033 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:31:06.382305 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:31:06.383048 | orchestrator |
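The smartd sequence follows the usual install/configure/enable pattern. A sketch under the assumption of a simple template-based configuration (the role's real template and handler names may differ); the configuration change here is what later triggers the "Restart smartd service" handler further down in this run:

- name: Install smartmontools package
  ansible.builtin.apt:
    name: smartmontools
    state: present

- name: Create /var/log/smartd directory
  ansible.builtin.file:
    path: /var/log/smartd
    state: directory
    mode: "0755"

- name: Copy smartmontools configuration file
  ansible.builtin.template:
    src: smartd.conf.j2     # assumed template name
    dest: /etc/smartd.conf
  notify: Restart smartd service

- name: Manage smartd service
  ansible.builtin.service:
    name: smartd
    state: started
    enabled: true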
2025-06-01 22:31:06.383962 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-06-01 22:31:06.384721 | orchestrator | Sunday 01 June 2025 22:31:06 +0000 (0:00:01.054) 0:03:57.899 ***********
2025-06-01 22:31:06.464646 | orchestrator | ok: [testbed-manager]
2025-06-01 22:31:06.503380 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:31:06.588192 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:31:06.631599 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:31:06.710611 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:31:06.710759 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:31:06.711777 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:31:06.712637 | orchestrator |
2025-06-01 22:31:06.713657 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-06-01 22:31:06.714458 | orchestrator | Sunday 01 June 2025 22:31:06 +0000 (0:00:00.335) 0:03:58.235 ***********
2025-06-01 22:31:06.845834 | orchestrator | ok: [testbed-manager]
2025-06-01 22:31:06.889386 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:31:06.927288 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:31:06.979609 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:31:07.079860 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:31:07.080521 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:31:07.081620 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:31:07.082113 | orchestrator |
2025-06-01 22:31:07.082980 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-06-01 22:31:07.083268 | orchestrator | Sunday 01 June 2025 22:31:07 +0000 (0:00:00.367) 0:03:58.602 ***********
2025-06-01 22:31:07.197932 | orchestrator | ok: [testbed-manager]
2025-06-01 22:31:07.233451 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:31:07.275374 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:31:07.337238 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:31:07.460294 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:31:07.461500 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:31:07.462176 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:31:07.466415 | orchestrator |
2025-06-01 22:31:07.466450 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-06-01 22:31:07.468223 | orchestrator | Sunday 01 June 2025 22:31:07 +0000 (0:00:00.379) 0:03:58.982 ***********
2025-06-01 22:31:13.040131 | orchestrator | ok: [testbed-manager]
2025-06-01 22:31:13.040242 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:31:13.040257 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:31:13.040670 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:31:13.041135 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:31:13.042131 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:31:13.042622 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:31:13.043140 | orchestrator |
2025-06-01 22:31:13.046095 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2025-06-01 22:31:13.046972 | orchestrator | Sunday 01 June 2025 22:31:13 +0000 (0:00:05.580) 0:04:04.562 ***********
2025-06-01 22:31:13.458389 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 22:31:13.459347 | orchestrator |
2025-06-01 22:31:13.462573 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2025-06-01 22:31:13.463348 | orchestrator | Sunday 01 June 2025 22:31:13 +0000 (0:00:00.419) 0:04:04.981 ***********
2025-06-01 22:31:13.530705 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2025-06-01 22:31:13.531502 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2025-06-01 22:31:13.588514 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:31:13.589848 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2025-06-01 22:31:13.590563 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2025-06-01 22:31:13.641829 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2025-06-01 22:31:13.643290 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:31:13.646470 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2025-06-01 22:31:13.647810 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2025-06-01 22:31:13.649257 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2025-06-01 22:31:13.680664 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:31:13.743033 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:31:13.743216 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2025-06-01 22:31:13.743237 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2025-06-01 22:31:13.744140 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2025-06-01 22:31:13.749268 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2025-06-01 22:31:13.832593 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:31:13.833676 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:31:13.834169 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2025-06-01 22:31:13.835400 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2025-06-01 22:31:13.838047 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:31:13.838066 | orchestrator |
2025-06-01 22:31:13.838422 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-06-01 22:31:13.839339 | orchestrator | Sunday 01 June 2025 22:31:13 +0000 (0:00:00.374) 0:04:05.356 ***********
2025-06-01 22:31:14.249845 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 22:31:14.250200 | orchestrator |
2025-06-01 22:31:14.251597 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-06-01 22:31:14.255372 | orchestrator | Sunday 01 June 2025 22:31:14 +0000 (0:00:00.416) 0:04:05.773 ***********
2025-06-01 22:31:14.366824 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2025-06-01 22:31:14.367013 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-06-01 22:31:14.367746 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-06-01 22:31:14.404406 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:31:14.405342 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-06-01 22:31:14.441053 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:31:14.441673 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2025-06-01 22:31:14.491555 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:31:14.567066 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2025-06-01 22:31:14.567328 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:31:14.570597 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:31:14.571532 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:31:14.572834 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2025-06-01 22:31:14.573790 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:31:14.574831 | orchestrator |
2025-06-01 22:31:14.575418 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-06-01 22:31:14.576409 | orchestrator | Sunday 01 June 2025 22:31:14 +0000 (0:00:00.316) 0:04:06.089 ***********
2025-06-01 22:31:15.150254 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 22:31:15.150861 | orchestrator |
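Both loop-based cleanup tasks above (apt-daily timers, ModemManager) were skipped on all hosts, so nothing was disabled or removed at this point. For reference, a task of the following shape would disable the apt-daily timers when a (hypothetical) opt-in flag is set:

- name: Disable apt-daily timers
  ansible.builtin.systemd:
    name: "{{ item }}.timer"
    state: stopped
    enabled: false
  loop:
    - apt-daily-upgrade
    - apt-daily
  when: cleanup_disable_apt_timers | default(false)  # hypothetical flag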
2025-06-01 22:31:15.151828 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-06-01 22:31:15.153095 | orchestrator | Sunday 01 June 2025 22:31:15 +0000 (0:00:00.582) 0:04:06.672 ***********
2025-06-01 22:31:47.754214 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:31:47.754337 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:31:47.754353 | orchestrator | changed: [testbed-manager]
2025-06-01 22:31:47.754365 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:31:47.754376 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:31:47.754387 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:31:47.754463 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:31:47.754787 | orchestrator |
2025-06-01 22:31:47.756119 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-06-01 22:31:47.757846 | orchestrator | Sunday 01 June 2025 22:31:47 +0000 (0:00:32.601) 0:04:39.274 ***********
2025-06-01 22:31:55.159747 | orchestrator | changed: [testbed-manager]
2025-06-01 22:31:55.160206 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:31:55.161868 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:31:55.163190 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:31:55.165397 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:31:55.166242 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:31:55.167246 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:31:55.168925 | orchestrator |
2025-06-01 22:31:55.169686 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-06-01 22:31:55.170182 | orchestrator | Sunday 01 June 2025 22:31:55 +0000 (0:00:07.406) 0:04:46.681 ***********
2025-06-01 22:32:02.208278 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:32:02.208403 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:32:02.210217 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:32:02.210527 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:32:02.212750 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:32:02.213302 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:32:02.214253 | orchestrator | changed: [testbed-manager]
2025-06-01 22:32:02.214741 | orchestrator |
2025-06-01 22:32:02.215496 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-06-01 22:32:02.216010 | orchestrator | Sunday 01 June 2025 22:32:02 +0000 (0:00:07.048) 0:04:53.729 ***********
2025-06-01 22:32:03.716030 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:32:03.716138 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:32:03.716218 | orchestrator | ok: [testbed-manager]
2025-06-01 22:32:03.716749 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:32:03.718224 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:32:03.719732 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:32:03.720203 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:32:03.724029 | orchestrator |
2025-06-01 22:32:03.724487 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-06-01 22:32:03.727113 | orchestrator | Sunday 01 June 2025 22:32:03 +0000 (0:00:01.509) 0:04:55.239 ***********
2025-06-01 22:32:09.139839 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:32:09.139957 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:32:09.140674 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:32:09.140700 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:32:09.142466 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:32:09.144636 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:32:09.145482 | orchestrator | changed: [testbed-manager]
2025-06-01 22:32:09.147043 | orchestrator |
2025-06-01 22:32:09.148311 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-06-01 22:32:09.149188 | orchestrator | Sunday 01 June 2025 22:32:09 +0000 (0:00:05.420) 0:05:00.660 ***********
2025-06-01 22:32:09.564920 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 22:32:09.565082 | orchestrator |
2025-06-01 22:32:09.566175 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-06-01 22:32:09.566749 | orchestrator | Sunday 01 June 2025 22:32:09 +0000 (0:00:00.427) 0:05:01.087 ***********
2025-06-01 22:32:10.274360 | orchestrator | changed: [testbed-manager]
2025-06-01 22:32:10.274518 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:32:10.276495 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:32:10.277745 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:32:10.278860 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:32:10.280260 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:32:10.281284 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:32:10.282108 | orchestrator |
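This cleanup pass removes image-provisioning leftovers: cloud-init and unattended-upgrades are uninstalled (the generic package cleanup alone took about 33 seconds), the apt cache is cleaned, orphaned dependencies are autoremoved, and the cloud-init configuration directory is deleted. A sketch of equivalent tasks, with the directory path as an assumption:

- name: Remove cloudinit package
  ansible.builtin.apt:
    name: cloud-init
    state: absent
    purge: true

- name: Uninstall unattended-upgrades package
  ansible.builtin.apt:
    name: unattended-upgrades
    state: absent

- name: Remove useless packages from the cache
  ansible.builtin.apt:
    autoclean: true

- name: Remove dependencies that are no longer required
  ansible.builtin.apt:
    autoremove: true

- name: Remove cloud-init configuration directory
  ansible.builtin.file:
    path: /etc/cloud        # assumed path
    state: absent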
2025-06-01 22:32:10.282470 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-06-01 22:32:10.282960 | orchestrator | Sunday 01 June 2025 22:32:10 +0000 (0:00:00.708) 0:05:01.796 ***********
2025-06-01 22:32:11.893310 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:32:11.893455 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:32:11.893473 | orchestrator | ok: [testbed-manager]
2025-06-01 22:32:11.893546 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:32:11.893950 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:32:11.894239 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:32:11.894738 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:32:11.895037 | orchestrator |
2025-06-01 22:32:11.895343 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-06-01 22:32:11.895743 | orchestrator | Sunday 01 June 2025 22:32:11 +0000 (0:00:01.620) 0:05:03.416 ***********
2025-06-01 22:32:12.700208 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:32:12.700553 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:32:12.700584 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:32:12.701438 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:32:12.702317 | orchestrator | changed: [testbed-manager]
2025-06-01 22:32:12.703305 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:32:12.704265 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:32:12.705170 | orchestrator |
2025-06-01 22:32:12.706155 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-06-01 22:32:12.707078 | orchestrator | Sunday 01 June 2025 22:32:12 +0000 (0:00:00.808) 0:05:04.224 ***********
2025-06-01 22:32:12.784331 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:32:12.876252 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:32:12.910119 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:32:12.943323 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:32:12.996498 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:32:12.997285 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:32:12.998895 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:32:12.999685 | orchestrator |
2025-06-01 22:32:13.003116 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-06-01 22:32:13.003937 | orchestrator | Sunday 01 June 2025 22:32:12 +0000 (0:00:00.296) 0:05:04.520 ***********
2025-06-01 22:32:13.066376 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:32:13.139826 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:32:13.171850 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:32:13.208380 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:32:13.383352 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:32:13.383893 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:32:13.384369 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:32:13.384607 | orchestrator |
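Timezone handling is small: make sure tzdata is present and pin the clock to UTC; the /etc/adjtime tasks were skipped on all hosts. A sketch of the two tasks that ran:

- name: Install tzdata package
  ansible.builtin.apt:
    name: tzdata
    state: present

- name: Set timezone to UTC
  community.general.timezone:
    name: UTC

Keeping every node on UTC avoids skewed timestamps when logs from the six nodes and the manager are correlated later.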
2025-06-01 22:32:13.390352 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-06-01 22:32:13.390435 | orchestrator | Sunday 01 June 2025 22:32:13 +0000 (0:00:00.384) 0:05:04.905 ***********
2025-06-01 22:32:13.467732 | orchestrator | ok: [testbed-manager]
2025-06-01 22:32:13.503126 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:32:13.536490 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:32:13.625034 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:32:13.695320 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:32:13.695932 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:32:13.697040 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:32:13.700573 | orchestrator |
2025-06-01 22:32:13.700598 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-06-01 22:32:13.700611 | orchestrator | Sunday 01 June 2025 22:32:13 +0000 (0:00:00.312) 0:05:05.218 ***********
2025-06-01 22:32:13.811060 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:32:13.844076 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:32:13.875609 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:32:13.913064 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:32:13.983104 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:32:13.983281 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:32:13.983702 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:32:13.984268 | orchestrator |
2025-06-01 22:32:13.984783 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-06-01 22:32:13.985519 | orchestrator | Sunday 01 June 2025 22:32:13 +0000 (0:00:00.289) 0:05:05.508 ***********
2025-06-01 22:32:14.091052 | orchestrator | ok: [testbed-manager]
2025-06-01 22:32:14.150216 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:32:14.186471 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:32:14.224467 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:32:14.301666 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:32:14.302244 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:32:14.303438 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:32:14.304429 | orchestrator |
2025-06-01 22:32:14.305234 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2025-06-01 22:32:14.306137 | orchestrator | Sunday 01 June 2025 22:32:14 +0000 (0:00:00.316) 0:05:05.824 ***********
2025-06-01 22:32:14.420899 | orchestrator | ok: [testbed-manager] =>
2025-06-01 22:32:14.421204 | orchestrator |   docker_version: 5:27.5.1
2025-06-01 22:32:14.459261 | orchestrator | ok: [testbed-node-3] =>
2025-06-01 22:32:14.462241 | orchestrator |   docker_version: 5:27.5.1
2025-06-01 22:32:14.491338 | orchestrator | ok: [testbed-node-4] =>
2025-06-01 22:32:14.492075 | orchestrator |   docker_version: 5:27.5.1
2025-06-01 22:32:14.538841 | orchestrator | ok: [testbed-node-5] =>
2025-06-01 22:32:14.539089 | orchestrator |   docker_version: 5:27.5.1
2025-06-01 22:32:14.603805 | orchestrator | ok: [testbed-node-0] =>
2025-06-01 22:32:14.604171 | orchestrator |   docker_version: 5:27.5.1
2025-06-01 22:32:14.604857 | orchestrator | ok: [testbed-node-1] =>
2025-06-01 22:32:14.605215 | orchestrator |   docker_version: 5:27.5.1
2025-06-01 22:32:14.606258 | orchestrator | ok: [testbed-node-2] =>
2025-06-01 22:32:14.606861 | orchestrator |   docker_version: 5:27.5.1
2025-06-01 22:32:14.607578 | orchestrator |
2025-06-01 22:32:14.608232 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2025-06-01 22:32:14.608732 | orchestrator | Sunday 01 June 2025 22:32:14 +0000 (0:00:00.304) 0:05:06.129 ***********
2025-06-01 22:32:14.848520 | orchestrator | ok: [testbed-manager] =>
2025-06-01 22:32:14.848981 | orchestrator |   docker_cli_version: 5:27.5.1
2025-06-01 22:32:14.885832 | orchestrator | ok: [testbed-node-3] =>
2025-06-01 22:32:14.886913 | orchestrator |   docker_cli_version: 5:27.5.1
2025-06-01 22:32:14.929805 | orchestrator | ok: [testbed-node-4] =>
2025-06-01 22:32:14.929883 | orchestrator |   docker_cli_version: 5:27.5.1
2025-06-01 22:32:14.964669 | orchestrator | ok: [testbed-node-5] =>
2025-06-01 22:32:14.965720 | orchestrator |   docker_cli_version: 5:27.5.1
2025-06-01 22:32:15.047681 | orchestrator | ok: [testbed-node-0] =>
2025-06-01 22:32:15.047803 | orchestrator |   docker_cli_version: 5:27.5.1
2025-06-01 22:32:15.047876 | orchestrator | ok: [testbed-node-1] =>
2025-06-01 22:32:15.047952 | orchestrator |   docker_cli_version: 5:27.5.1
2025-06-01 22:32:15.048607 | orchestrator | ok: [testbed-node-2] =>
2025-06-01 22:32:15.049146 | orchestrator |   docker_cli_version: 5:27.5.1
2025-06-01 22:32:15.049944 | orchestrator |
2025-06-01 22:32:15.050527 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-06-01 22:32:15.051129 | orchestrator | Sunday 01 June 2025 22:32:15 +0000 (0:00:00.441) 0:05:06.570 ***********
2025-06-01 22:32:15.130854 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:32:15.162694 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:32:15.193208 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:32:15.236055 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:32:15.277656 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:32:15.346093 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:32:15.347180 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:32:15.348682 | orchestrator |
2025-06-01 22:32:15.350566 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-06-01 22:32:15.350649 | orchestrator | Sunday 01 June 2025 22:32:15 +0000 (0:00:00.298) 0:05:06.869 ***********
2025-06-01 22:32:15.441384 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:32:15.516978 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:32:15.549939 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:32:15.581296 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:32:15.646614 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:32:15.647481 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:32:15.648843 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:32:15.650509 | orchestrator |
2025-06-01 22:32:15.651235 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2025-06-01 22:32:15.652133 | orchestrator | Sunday 01 June 2025 22:32:15 +0000 (0:00:00.301) 0:05:07.171 ***********
2025-06-01 22:32:16.086453 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 22:32:16.088163 | orchestrator |
2025-06-01 22:32:16.090980 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2025-06-01 22:32:16.091949 | orchestrator | Sunday 01 June 2025 22:32:16 +0000 (0:00:00.438) 0:05:07.609 ***********
2025-06-01 22:32:16.909576 | orchestrator | ok: [testbed-manager]
2025-06-01 22:32:16.911108 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:32:16.911144 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:32:16.912220 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:32:16.913858 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:32:16.915025 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:32:16.915913 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:32:16.916818 | orchestrator |
2025-06-01 22:32:16.918335 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2025-06-01 22:32:16.919217 | orchestrator | Sunday 01 June 2025 22:32:16 +0000 (0:00:00.818) 0:05:08.428 ***********
2025-06-01 22:32:19.686968 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:32:19.687830 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:32:19.689430 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:32:19.690732 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:32:19.692093 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:32:19.692555 | orchestrator | ok: [testbed-manager]
2025-06-01 22:32:19.693673 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:32:19.694655 | orchestrator |
2025-06-01 22:32:19.695416 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2025-06-01 22:32:19.696485 | orchestrator | Sunday 01 June 2025 22:32:19 +0000 (0:00:02.781) 0:05:11.209 ***********
2025-06-01 22:32:19.761272 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2025-06-01 22:32:19.761668 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2025-06-01 22:32:19.843216 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2025-06-01 22:32:19.843636 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2025-06-01 22:32:19.843661 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2025-06-01 22:32:19.844279 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2025-06-01 22:32:19.917458 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:32:19.918783 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2025-06-01 22:32:19.923058 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2025-06-01 22:32:20.154957 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:32:20.155174 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2025-06-01 22:32:20.157342 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2025-06-01 22:32:20.162375 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2025-06-01 22:32:20.162405 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2025-06-01 22:32:20.230578 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:32:20.231654 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2025-06-01 22:32:20.232505 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2025-06-01 22:32:20.233995 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2025-06-01 22:32:20.308829 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:32:20.310008 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2025-06-01 22:32:20.310829 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2025-06-01 22:32:20.468191 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2025-06-01 22:32:20.468534 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:32:20.469732 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:32:20.473236 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2025-06-01 22:32:20.473271 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2025-06-01 22:32:20.473282 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2025-06-01 22:32:20.475642 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:32:20.475978 | orchestrator |
2025-06-01 22:32:20.476726 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2025-06-01 22:32:20.478256 | orchestrator | Sunday 01 June 2025 22:32:20 +0000 (0:00:00.781) 0:05:11.991 ***********
2025-06-01 22:32:26.189864 | orchestrator | ok: [testbed-manager]
2025-06-01 22:32:26.189961 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:32:26.190593 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:32:26.191268 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:32:26.193038 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:32:26.193568 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:32:26.194723 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:32:26.195838 | orchestrator |
2025-06-01 22:32:26.197322 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2025-06-01 22:32:26.198073 | orchestrator | Sunday 01 June 2025 22:32:26 +0000 (0:00:05.719) 0:05:17.710 ***********
2025-06-01 22:32:27.211711 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:32:27.211872 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:32:27.211889 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:32:27.212954 | orchestrator | ok: [testbed-manager]
2025-06-01 22:32:27.214005 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:32:27.214916 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:32:27.215881 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:32:27.217062 | orchestrator |
2025-06-01 22:32:27.217862 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-06-01 22:32:27.218707 | orchestrator | Sunday 01 June 2025 22:32:27 +0000 (0:00:01.021) 0:05:18.731 ***********
2025-06-01 22:32:34.038960 | orchestrator | ok: [testbed-manager]
2025-06-01 22:32:34.039222 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:32:34.039287 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:32:34.039373 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:32:34.042897 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:32:34.044251 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:32:34.045259 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:32:34.046217 | orchestrator |
2025-06-01 22:32:34.046923 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-06-01 22:32:34.047687 | orchestrator | Sunday 01 June 2025 22:32:34 +0000 (0:00:06.830) 0:05:25.562 ***********
2025-06-01 22:32:37.130863 | orchestrator | changed: [testbed-manager]
2025-06-01 22:32:37.131907 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:32:37.133895 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:32:37.136279 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:32:37.137765 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:32:37.138641 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:32:37.139546 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:32:37.140550 | orchestrator |
2025-06-01 22:32:37.141219 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-06-01 22:32:37.141246 | orchestrator | Sunday 01 June 2025 22:32:37 +0000 (0:00:03.089) 0:05:28.651 ***********
2025-06-01 22:32:38.784793 | orchestrator | ok: [testbed-manager]
2025-06-01 22:32:38.787750 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:32:38.787787 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:32:38.787799 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:32:38.788217 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:32:38.789469 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:32:38.790794 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:32:38.791949 | orchestrator |
2025-06-01 22:32:38.792702 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-06-01 22:32:38.795994 | orchestrator | Sunday 01 June 2025 22:32:38 +0000 (0:00:01.653) 0:05:30.305 ***********
2025-06-01 22:32:40.085357 | orchestrator | ok: [testbed-manager]
2025-06-01 22:32:40.085754 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:32:40.086530 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:32:40.087884 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:32:40.088190 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:32:40.089421 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:32:40.089563 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:32:40.090418 | orchestrator |
2025-06-01 22:32:40.090943 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-06-01 22:32:40.091361 | orchestrator | Sunday 01 June 2025 22:32:40 +0000 (0:00:01.302) 0:05:31.608 ***********
2025-06-01 22:32:40.314327 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:32:40.379504 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:32:40.445478 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:32:40.519485 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:32:40.662899 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:32:40.663099 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:32:40.663794 | orchestrator | changed: [testbed-manager]
2025-06-01 22:32:40.665200 | orchestrator |
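The install flow above is classic third-party-repository handling: trusted key, repository, cache refresh, then apt pinning so that exactly the printed docker_version (5:27.5.1) is kept. A sketch with assumed URLs and file names; the role may use a different mechanism, such as keyrings or a deb822 source:

- name: Add repository gpg key
  ansible.builtin.apt_key:
    url: https://download.docker.com/linux/ubuntu/gpg   # assumed URL
    state: present

- name: Add repository
  ansible.builtin.apt_repository:
    repo: "deb https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
    state: present
    update_cache: true

- name: Pin docker package version
  ansible.builtin.copy:
    dest: /etc/apt/preferences.d/docker-ce   # assumed file name
    content: |
      Package: docker-ce
      Pin: version 5:27.5.1*
      Pin-Priority: 1000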
2025-06-01 22:32:40.665854 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-06-01 22:32:40.666255 | orchestrator | Sunday 01 June 2025 22:32:40 +0000 (0:00:00.577) 0:05:32.186 ***********
2025-06-01 22:32:49.703062 | orchestrator | ok: [testbed-manager]
2025-06-01 22:32:49.703578 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:32:49.705237 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:32:49.706185 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:32:49.707260 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:32:49.708528 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:32:49.709203 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:32:49.709648 | orchestrator |
2025-06-01 22:32:49.710544 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-06-01 22:32:49.710972 | orchestrator | Sunday 01 June 2025 22:32:49 +0000 (0:00:09.038) 0:05:41.224 ***********
2025-06-01 22:32:50.610111 | orchestrator | changed: [testbed-manager]
2025-06-01 22:32:50.610833 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:32:50.611754 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:32:50.613245 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:32:50.614101 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:32:50.614875 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:32:50.615579 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:32:50.616295 | orchestrator |
2025-06-01 22:32:50.616926 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-06-01 22:32:50.617780 | orchestrator | Sunday 01 June 2025 22:32:50 +0000 (0:00:00.909) 0:05:42.134 ***********
2025-06-01 22:32:59.705414 | orchestrator | ok: [testbed-manager]
2025-06-01 22:32:59.705917 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:32:59.707393 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:32:59.708254 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:32:59.709153 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:32:59.709945 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:32:59.710246 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:32:59.711281 | orchestrator |
2025-06-01 22:32:59.711291 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-06-01 22:32:59.712088 | orchestrator | Sunday 01 June 2025 22:32:59 +0000 (0:00:09.093) 0:05:51.228 ***********
2025-06-01 22:33:09.566452 | orchestrator | ok: [testbed-manager]
2025-06-01 22:33:09.568538 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:33:09.568912 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:33:09.569729 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:33:09.572299 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:33:09.572528 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:33:09.573029 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:33:09.573762 | orchestrator |
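The unlock/install/lock bracket around containerd is a dpkg hold: the hold is released so apt may change the package, the pinned version is installed, and the hold is reinstated so routine apt runs cannot upgrade it behind the deployment's back. On the manager the install steps reported "ok" because the packages were already present. A sketch:

- name: Unlock containerd package
  ansible.builtin.dpkg_selections:
    name: containerd.io      # assumed package name
    selection: install

- name: Install containerd package
  ansible.builtin.apt:
    name: containerd.io
    state: present

- name: Lock containerd package
  ansible.builtin.dpkg_selections:
    name: containerd.io
    selection: hold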
2025-06-01 22:33:09.574611 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-06-01 22:33:09.575813 | orchestrator | Sunday 01 June 2025 22:33:09 +0000 (0:00:09.858) 0:06:01.087 ***********
2025-06-01 22:33:09.983993 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-06-01 22:33:10.736143 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-06-01 22:33:10.736819 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2025-06-01 22:33:10.738829 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2025-06-01 22:33:10.739217 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2025-06-01 22:33:10.740202 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2025-06-01 22:33:10.741273 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2025-06-01 22:33:10.741570 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-06-01 22:33:10.742186 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-06-01 22:33:10.742910 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-06-01 22:33:10.744012 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-06-01 22:33:10.744288 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-06-01 22:33:10.745371 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-06-01 22:33:10.746125 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-06-01 22:33:10.747056 | orchestrator |
2025-06-01 22:33:10.748088 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-06-01 22:33:10.748480 | orchestrator | Sunday 01 June 2025 22:33:10 +0000 (0:00:01.171) 0:06:02.258 ***********
2025-06-01 22:33:10.866291 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:33:10.935402 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:33:11.007267 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:33:11.089404 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:33:11.156556 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:33:11.283285 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:33:11.284453 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:33:11.285127 | orchestrator |
2025-06-01 22:33:11.286469 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-06-01 22:33:11.289056 | orchestrator | Sunday 01 June 2025 22:33:11 +0000 (0:00:00.549) 0:06:02.808 ***********
2025-06-01 22:33:14.890138 | orchestrator | ok: [testbed-manager]
2025-06-01 22:33:14.891765 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:33:14.893375 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:33:14.896775 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:33:14.897426 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:33:14.899295 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:33:14.900331 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:33:14.901722 | orchestrator |
2025-06-01 22:33:14.902850 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-06-01 22:33:14.904005 | orchestrator | Sunday 01 June 2025 22:33:14 +0000 (0:00:03.603) 0:06:06.411 ***********
2025-06-01 22:33:15.034437 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:33:15.106856 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:33:15.169537 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:33:15.251292 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:33:15.314729 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:33:15.421498 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:33:15.422545 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:33:15.423935 | orchestrator |
2025-06-01 22:33:15.424488 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-06-01 22:33:15.425504 | orchestrator | Sunday 01 June 2025 22:33:15 +0000 (0:00:00.530) 0:06:06.942 ***********
2025-06-01 22:33:15.498623 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2025-06-01 22:33:15.499722 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2025-06-01 22:33:15.571667 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:33:15.572869 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-06-01 22:33:15.573885 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-06-01 22:33:15.643954 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:33:15.644298 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-06-01 22:33:15.645139 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-06-01 22:33:15.721345 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:33:15.724845 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-06-01 22:33:15.726168 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-06-01 22:33:15.792356 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:33:15.793221 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2025-06-01 22:33:15.794490 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-06-01 22:33:15.867762 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:33:15.869145 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2025-06-01 22:33:15.869447 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2025-06-01 22:33:15.979236 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:33:15.981394 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2025-06-01 22:33:15.982066 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2025-06-01 22:33:15.983168 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:33:15.984315 | orchestrator |
2025-06-01 22:33:15.986234 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-06-01 22:33:15.987289 | orchestrator | Sunday 01 June 2025 22:33:15 +0000 (0:00:00.561) 0:06:07.503 ***********
2025-06-01 22:33:16.115221 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:33:16.209780 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:33:16.273491 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:33:16.338963 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:33:16.408294 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:33:16.518714 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:33:16.518872 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:33:16.519831 | orchestrator |
2025-06-01 22:33:16.521381 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-06-01 22:33:16.525263 | orchestrator | Sunday 01 June 2025 22:33:16 +0000 (0:00:00.537) 0:06:08.041 ***********
2025-06-01 22:33:16.657627 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:33:16.723910 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:33:16.792905 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:33:16.864375 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:33:16.948698 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:33:17.050693 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:33:17.051736 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:33:17.053081 | orchestrator |
2025-06-01 22:33:17.054778 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-06-01 22:33:17.056012 | orchestrator | Sunday 01 June 2025 22:33:17 +0000 (0:00:00.530) 0:06:08.572 ***********
2025-06-01 22:33:17.186497 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:33:17.256471 | orchestrator | skipping: [testbed-node-3]
2025-06-01 22:33:17.510215 | orchestrator | skipping: [testbed-node-4]
2025-06-01 22:33:17.573530 | orchestrator | skipping: [testbed-node-5]
2025-06-01 22:33:17.636411 | orchestrator | skipping: [testbed-node-0]
2025-06-01 22:33:17.785058 | orchestrator | skipping: [testbed-node-1]
2025-06-01 22:33:17.785343 | orchestrator | skipping: [testbed-node-2]
2025-06-01 22:33:17.786709 | orchestrator |
2025-06-01 22:33:17.787578 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-06-01 22:33:17.788380 | orchestrator | Sunday 01 June 2025 22:33:17 +0000 (0:00:00.736) 0:06:09.308 ***********
2025-06-01 22:33:19.410166 | orchestrator | ok: [testbed-manager]
2025-06-01 22:33:19.413786 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:33:19.413886 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:33:19.413903 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:33:19.413914 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:33:19.413925 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:33:19.413936 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:33:19.414001 | orchestrator |
2025-06-01 22:33:19.415974 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-06-01 22:33:19.416361 | orchestrator | Sunday 01 June 2025 22:33:19 +0000 (0:00:01.623) 0:06:10.931 ***********
2025-06-01 22:33:20.344979 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 22:33:20.345869 | orchestrator |
2025-06-01 22:33:20.346975 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-06-01 22:33:20.350094 | orchestrator | Sunday 01 June 2025 22:33:20 +0000 (0:00:00.936) 0:06:11.868 ***********
2025-06-01 22:33:21.175919 | orchestrator | ok: [testbed-manager]
2025-06-01 22:33:21.176075 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:33:21.177018 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:33:21.178569 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:33:21.178596 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:33:21.179348 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:33:21.179885 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:33:21.180816 | orchestrator |
2025-06-01 22:33:21.181769 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-06-01 22:33:21.182098 | orchestrator | Sunday 01 June 2025 22:33:21 +0000 (0:00:00.828) 0:06:12.696 ***********
2025-06-01 22:33:21.611686 | orchestrator | ok: [testbed-manager]
2025-06-01 22:33:21.684285 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:33:22.273004 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:33:22.274280 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:33:22.275401 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:33:22.276213 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:33:22.276980 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:33:22.277876 | orchestrator |
2025-06-01 22:33:22.278419 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-06-01 22:33:22.278808 | orchestrator | Sunday 01 June 2025 22:33:22 +0000 (0:00:01.094) 0:06:13.791 ***********
2025-06-01 22:33:23.560088 | orchestrator | ok: [testbed-manager]
2025-06-01 22:33:23.561873 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:33:23.563833 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:33:23.565706 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:33:23.566747 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:33:23.568359 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:33:23.569480 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:33:23.574855 | orchestrator |
2025-06-01 22:33:23.575348 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-06-01 22:33:23.577861 | orchestrator | Sunday 01 June 2025 22:33:23 +0000 (0:00:01.291) 0:06:15.083 ***********
2025-06-01 22:33:23.733505 | orchestrator | skipping: [testbed-manager]
2025-06-01 22:33:24.994501 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:33:24.995346 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:33:24.999593 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:33:25.000341 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:33:25.002322 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:33:25.003802 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:33:25.004877 | orchestrator |
2025-06-01 22:33:25.005934 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-06-01 22:33:25.006357 | orchestrator | Sunday 01 June 2025 22:33:24 +0000 (0:00:01.432) 0:06:16.515 ***********
2025-06-01 22:33:26.300757 | orchestrator | ok: [testbed-manager]
2025-06-01 22:33:26.301860 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:33:26.302613 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:33:26.303379 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:33:26.304141 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:33:26.305241 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:33:26.305517 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:33:26.306498 | orchestrator |
2025-06-01 22:33:26.306973 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-06-01 22:33:26.307776 | orchestrator | Sunday 01 June 2025 22:33:26 +0000 (0:00:01.306) 0:06:17.822 ***********
2025-06-01 22:33:27.952698 | orchestrator | changed: [testbed-manager]
2025-06-01 22:33:27.952854 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:33:27.953988 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:33:27.957279 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:33:27.957310 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:33:27.958663 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:33:27.960025 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:33:27.961053 | orchestrator |
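Two configuration surfaces are managed here: a systemd drop-in for the docker unit and the daemon.json file. Note how change detection feeds the next task: on testbed-manager the overlay file was unchanged ("ok"), so the conditional daemon reload was skipped there. A sketch of the pattern, with template names as assumptions:

- name: Create systemd overlay directory
  ansible.builtin.file:
    path: /etc/systemd/system/docker.service.d
    state: directory

- name: Copy systemd overlay file
  ansible.builtin.template:
    src: overlay.conf.j2               # assumed template name
    dest: /etc/systemd/system/docker.service.d/overlay.conf
  register: docker_systemd_overlay

- name: Reload systemd daemon if systemd overlay file is changed
  ansible.builtin.systemd:
    daemon_reload: true
  when: docker_systemd_overlay is changed

- name: Copy daemon.json configuration file
  ansible.builtin.template:
    src: daemon.json.j2                # assumed template name
    dest: /etc/docker/daemon.json
  notify: Restart docker service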
2025-06-01 22:33:27.961838 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-06-01 22:33:27.963218 | orchestrator | Sunday 01 June 2025 22:33:27 +0000 (0:00:01.651) 0:06:19.474 ***********
2025-06-01 22:33:28.838845 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 22:33:28.839892 | orchestrator |
2025-06-01 22:33:28.841241 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-06-01 22:33:28.844292 | orchestrator | Sunday 01 June 2025 22:33:28 +0000 (0:00:00.887) 0:06:20.361 ***********
2025-06-01 22:33:30.133955 | orchestrator | ok: [testbed-manager]
2025-06-01 22:33:30.135563 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:33:30.136188 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:33:30.138819 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:33:30.139856 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:33:30.141169 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:33:30.142207 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:33:30.143000 | orchestrator |
2025-06-01 22:33:30.143657 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-06-01 22:33:30.144823 | orchestrator | Sunday 01 June 2025 22:33:30 +0000 (0:00:01.295) 0:06:21.657 ***********
2025-06-01 22:33:31.239709 | orchestrator | ok: [testbed-manager]
2025-06-01 22:33:31.240087 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:33:31.241857 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:33:31.241886 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:33:31.242871 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:33:31.243842 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:33:31.244979 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:33:31.245490 | orchestrator |
2025-06-01 22:33:31.246209 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-06-01 22:33:31.246916 | orchestrator | Sunday 01 June 2025 22:33:31 +0000 (0:00:01.102) 0:06:22.760 ***********
2025-06-01 22:33:32.600772 | orchestrator | ok: [testbed-manager]
2025-06-01 22:33:32.601896 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:33:32.602905 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:33:32.603734 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:33:32.604346 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:33:32.604722 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:33:32.605145 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:33:32.605524 | orchestrator |
2025-06-01 22:33:32.605929 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-06-01 22:33:32.606236 | orchestrator | Sunday 01 June 2025 22:33:32 +0000 (0:00:01.363) 0:06:24.123 ***********
2025-06-01 22:33:33.720115 | orchestrator | ok: [testbed-manager]
2025-06-01 22:33:33.720646 | orchestrator | ok: [testbed-node-3]
2025-06-01 22:33:33.721393 | orchestrator | ok: [testbed-node-4]
2025-06-01 22:33:33.722529 | orchestrator | ok: [testbed-node-5]
2025-06-01 22:33:33.723336 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:33:33.724112 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:33:33.724845 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:33:33.725450 | orchestrator |
2025-06-01 22:33:33.726013 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-06-01 22:33:33.726856 | orchestrator | Sunday 01 June 2025 22:33:33 +0000 (0:00:01.118) 0:06:25.241 ***********
2025-06-01 22:33:34.966754 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 22:33:34.967542 | orchestrator |
2025-06-01 22:33:34.969991 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-01 22:33:34.970086 | orchestrator | Sunday 01 June 2025 22:33:34 +0000 (0:00:00.938) 0:06:26.179 ***********
2025-06-01 22:33:34.971125 | orchestrator |
2025-06-01 22:33:34.971910 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-01 22:33:34.973695 | orchestrator | Sunday 01 June 2025 22:33:34 +0000 (0:00:00.042) 0:06:26.222 ***********
2025-06-01 22:33:34.976524 | orchestrator |
2025-06-01 22:33:34.977905 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-01 22:33:34.980366 | orchestrator | Sunday 01 June 2025 22:33:34 +0000 (0:00:00.060) 0:06:26.283 ***********
2025-06-01 22:33:34.980412 | orchestrator |
2025-06-01 22:33:34.980746 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-01 22:33:34.982269 | orchestrator | Sunday 01 June 2025 22:33:34 +0000 (0:00:00.040) 0:06:26.324 ***********
2025-06-01 22:33:34.982307 | orchestrator |
2025-06-01 22:33:34.982327 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-01 22:33:34.982347 | orchestrator | Sunday 01 June 2025 22:33:34 +0000 (0:00:00.038) 0:06:26.362 ***********
2025-06-01 22:33:34.982459 | orchestrator |
2025-06-01 22:33:34.982814 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-01 22:33:34.983154 | orchestrator | Sunday 01 June 2025 22:33:34 +0000 (0:00:00.047) 0:06:26.410 ***********
2025-06-01 22:33:34.983398 | orchestrator |
2025-06-01 22:33:34.984368 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-01 22:33:34.984403 | orchestrator | Sunday 01 June 2025 22:33:34 +0000 (0:00:00.038) 0:06:26.448 ***********
2025-06-01 22:33:34.984568 | orchestrator |
2025-06-01 22:33:34.984882 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-06-01 22:33:34.985140 | orchestrator | Sunday 01 June 2025 22:33:34 +0000 (0:00:00.039) 0:06:26.487 ***********
2025-06-01 22:33:36.273943 | orchestrator | ok: [testbed-node-0]
2025-06-01 22:33:36.275076 | orchestrator | ok: [testbed-node-1]
2025-06-01 22:33:36.278081 | orchestrator | ok: [testbed-node-2]
2025-06-01 22:33:36.278108 | orchestrator |
2025-06-01 22:33:36.278122 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-06-01 22:33:36.278135 | orchestrator | Sunday 01 June 2025 22:33:36 +0000 (0:00:01.306) 0:06:27.794 ***********
2025-06-01 22:33:37.579214 | orchestrator | changed: [testbed-manager]
2025-06-01 22:33:37.579395 | orchestrator | changed: [testbed-node-3]
2025-06-01 22:33:37.580713 | orchestrator | changed: [testbed-node-4]
2025-06-01 22:33:37.581433 | orchestrator | changed: [testbed-node-5]
2025-06-01 22:33:37.582923 | orchestrator | changed: [testbed-node-0]
2025-06-01 22:33:37.583991 | orchestrator | changed: [testbed-node-1]
2025-06-01 22:33:37.584577 | orchestrator | changed: [testbed-node-2]
2025-06-01 22:33:37.586200 | orchestrator |
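The burst of "Flush handlers" entries is the role forcing queued handlers to run mid-play rather than at the end of the play, which is why the rsyslog, smartd and docker restarts appear at this point. In playbook terms each of those entries is simply:

- name: Flush handlers
  ansible.builtin.meta: flush_handlers

Only testbed-node-0/1/2 had queued the package cache refresh handler, so it ran on those three hosts alone.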
orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-06-01 22:33:37.587651 | orchestrator | Sunday 01 June 2025 22:33:37 +0000 (0:00:01.306) 0:06:29.101 *********** 2025-06-01 22:33:38.780576 | orchestrator | changed: [testbed-manager] 2025-06-01 22:33:38.781587 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:33:38.785049 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:33:38.786147 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:33:38.786727 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:33:38.787328 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:33:38.788047 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:33:38.788725 | orchestrator | 2025-06-01 22:33:38.789456 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-06-01 22:33:38.790253 | orchestrator | Sunday 01 June 2025 22:33:38 +0000 (0:00:01.200) 0:06:30.301 *********** 2025-06-01 22:33:38.921717 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:33:41.287922 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:33:41.288262 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:33:41.289358 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:33:41.291922 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:33:41.293352 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:33:41.293703 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:33:41.294494 | orchestrator | 2025-06-01 22:33:41.295749 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-06-01 22:33:41.296475 | orchestrator | Sunday 01 June 2025 22:33:41 +0000 (0:00:02.508) 0:06:32.810 *********** 2025-06-01 22:33:41.398463 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:33:41.399596 | orchestrator | 2025-06-01 22:33:41.399662 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-06-01 22:33:41.400457 | orchestrator | Sunday 01 June 2025 22:33:41 +0000 (0:00:00.108) 0:06:32.919 *********** 2025-06-01 22:33:42.419480 | orchestrator | ok: [testbed-manager] 2025-06-01 22:33:42.419772 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:33:42.421044 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:33:42.421448 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:33:42.422169 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:33:42.423379 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:33:42.424312 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:33:42.424828 | orchestrator | 2025-06-01 22:33:42.426330 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-06-01 22:33:42.426540 | orchestrator | Sunday 01 June 2025 22:33:42 +0000 (0:00:01.021) 0:06:33.941 *********** 2025-06-01 22:33:42.770774 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:33:42.840844 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:33:42.908285 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:33:42.976683 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:33:43.043769 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:33:43.172536 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:33:43.173417 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:33:43.175403 | orchestrator | 2025-06-01 22:33:43.178212 | orchestrator | TASK [osism.services.docker : Include facts tasks] 
***************************** 2025-06-01 22:33:43.178243 | orchestrator | Sunday 01 June 2025 22:33:43 +0000 (0:00:00.755) 0:06:34.696 *********** 2025-06-01 22:33:44.096306 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:33:44.097264 | orchestrator | 2025-06-01 22:33:44.097373 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-06-01 22:33:44.097957 | orchestrator | Sunday 01 June 2025 22:33:44 +0000 (0:00:00.920) 0:06:35.617 *********** 2025-06-01 22:33:44.567112 | orchestrator | ok: [testbed-manager] 2025-06-01 22:33:44.991674 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:33:44.991782 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:33:44.992706 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:33:44.993389 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:33:44.994203 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:33:44.995082 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:33:44.996310 | orchestrator | 2025-06-01 22:33:44.997161 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-06-01 22:33:44.998339 | orchestrator | Sunday 01 June 2025 22:33:44 +0000 (0:00:00.898) 0:06:36.515 *********** 2025-06-01 22:33:47.718162 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-06-01 22:33:47.718334 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-06-01 22:33:47.719998 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-06-01 22:33:47.720344 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-06-01 22:33:47.721732 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-06-01 22:33:47.722539 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-06-01 22:33:47.723174 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-06-01 22:33:47.723911 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-06-01 22:33:47.728042 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-06-01 22:33:47.730109 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-06-01 22:33:47.731096 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-06-01 22:33:47.732162 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-06-01 22:33:47.733221 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-06-01 22:33:47.734081 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-06-01 22:33:47.734367 | orchestrator | 2025-06-01 22:33:47.735562 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-06-01 22:33:47.736002 | orchestrator | Sunday 01 June 2025 22:33:47 +0000 (0:00:02.723) 0:06:39.239 *********** 2025-06-01 22:33:47.905272 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:33:47.998418 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:33:48.072005 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:33:48.139367 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:33:48.228983 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:33:48.346952 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:33:48.347071 | orchestrator | skipping: 
[testbed-node-2] 2025-06-01 22:33:48.347088 | orchestrator | 2025-06-01 22:33:48.348727 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-06-01 22:33:48.350866 | orchestrator | Sunday 01 June 2025 22:33:48 +0000 (0:00:00.623) 0:06:39.862 *********** 2025-06-01 22:33:49.161308 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:33:49.162393 | orchestrator | 2025-06-01 22:33:49.163969 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-06-01 22:33:49.165159 | orchestrator | Sunday 01 June 2025 22:33:49 +0000 (0:00:00.819) 0:06:40.682 *********** 2025-06-01 22:33:49.739555 | orchestrator | ok: [testbed-manager] 2025-06-01 22:33:49.811233 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:33:50.233712 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:33:50.234241 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:33:50.235252 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:33:50.235919 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:33:50.237990 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:33:50.238976 | orchestrator | 2025-06-01 22:33:50.239835 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-06-01 22:33:50.240897 | orchestrator | Sunday 01 June 2025 22:33:50 +0000 (0:00:01.074) 0:06:41.756 *********** 2025-06-01 22:33:50.667915 | orchestrator | ok: [testbed-manager] 2025-06-01 22:33:51.056125 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:33:51.056296 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:33:51.056861 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:33:51.057785 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:33:51.057817 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:33:51.059096 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:33:51.059181 | orchestrator | 2025-06-01 22:33:51.059761 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-06-01 22:33:51.061917 | orchestrator | Sunday 01 June 2025 22:33:51 +0000 (0:00:00.821) 0:06:42.578 *********** 2025-06-01 22:33:51.208912 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:33:51.276068 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:33:51.343688 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:33:51.421355 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:33:51.490353 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:33:51.588698 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:33:51.590707 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:33:51.591873 | orchestrator | 2025-06-01 22:33:51.593295 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-06-01 22:33:51.593919 | orchestrator | Sunday 01 June 2025 22:33:51 +0000 (0:00:00.533) 0:06:43.111 *********** 2025-06-01 22:33:52.983238 | orchestrator | ok: [testbed-manager] 2025-06-01 22:33:52.983440 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:33:52.983461 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:33:52.983472 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:33:52.984210 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:33:52.986541 | orchestrator | ok: [testbed-node-1] 
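The docker_compose tasks above clean up any legacy standalone docker-compose (apt preferences file, binary, distribution package) before the following task installs docker-compose-plugin from the Docker repository, which provides Compose v2 as a Docker CLI subcommand. A minimal way to confirm the switch on a node afterwards (standard commands, not part of the job output; assumes the invoking user may talk to the Docker daemon):

    # legacy standalone binary should be absent once the role has run
    command -v docker-compose || echo "standalone docker-compose not installed"

    # docker-compose-plugin exposes Compose v2 via the Docker CLI
    docker compose version
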
2025-06-01 22:33:52.986563 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:33:52.987164 | orchestrator | 2025-06-01 22:33:52.987927 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-06-01 22:33:52.988482 | orchestrator | Sunday 01 June 2025 22:33:52 +0000 (0:00:01.394) 0:06:44.506 *********** 2025-06-01 22:33:53.114167 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:33:53.184119 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:33:53.249074 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:33:53.326649 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:33:53.399544 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:33:53.498342 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:33:53.498533 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:33:53.499516 | orchestrator | 2025-06-01 22:33:53.500860 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-06-01 22:33:53.504054 | orchestrator | Sunday 01 June 2025 22:33:53 +0000 (0:00:00.515) 0:06:45.021 *********** 2025-06-01 22:34:00.841124 | orchestrator | ok: [testbed-manager] 2025-06-01 22:34:00.842247 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:34:00.845203 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:34:00.846320 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:34:00.847160 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:34:00.847969 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:34:00.848344 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:34:00.848903 | orchestrator | 2025-06-01 22:34:00.849923 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-06-01 22:34:00.850205 | orchestrator | Sunday 01 June 2025 22:34:00 +0000 (0:00:07.341) 0:06:52.362 *********** 2025-06-01 22:34:02.170368 | orchestrator | ok: [testbed-manager] 2025-06-01 22:34:02.170539 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:34:02.171718 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:34:02.172485 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:34:02.173287 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:34:02.174279 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:34:02.175187 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:34:02.176537 | orchestrator | 2025-06-01 22:34:02.177832 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-06-01 22:34:02.178326 | orchestrator | Sunday 01 June 2025 22:34:02 +0000 (0:00:01.330) 0:06:53.693 *********** 2025-06-01 22:34:03.878679 | orchestrator | ok: [testbed-manager] 2025-06-01 22:34:03.879507 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:34:03.882534 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:34:03.882602 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:34:03.882880 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:34:03.883723 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:34:03.884355 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:34:03.885476 | orchestrator | 2025-06-01 22:34:03.885861 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-06-01 22:34:03.886309 | orchestrator | Sunday 01 June 2025 22:34:03 +0000 (0:00:01.706) 0:06:55.399 *********** 2025-06-01 22:34:05.727062 | orchestrator | ok: [testbed-manager] 2025-06-01 22:34:05.728262 | 
orchestrator | changed: [testbed-node-3] 2025-06-01 22:34:05.729471 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:34:05.731997 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:34:05.732024 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:34:05.732688 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:34:05.733326 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:34:05.733724 | orchestrator | 2025-06-01 22:34:05.734731 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-01 22:34:05.734904 | orchestrator | Sunday 01 June 2025 22:34:05 +0000 (0:00:01.849) 0:06:57.248 *********** 2025-06-01 22:34:06.594299 | orchestrator | ok: [testbed-manager] 2025-06-01 22:34:06.595236 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:34:06.596786 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:34:06.597396 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:34:06.598949 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:34:06.600015 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:34:06.601067 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:34:06.602134 | orchestrator | 2025-06-01 22:34:06.603354 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-01 22:34:06.604117 | orchestrator | Sunday 01 June 2025 22:34:06 +0000 (0:00:00.868) 0:06:58.117 *********** 2025-06-01 22:34:06.771489 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:34:06.870873 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:34:06.941798 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:34:07.028724 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:34:07.105673 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:34:07.540949 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:34:07.542315 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:34:07.543510 | orchestrator | 2025-06-01 22:34:07.545982 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-06-01 22:34:07.546486 | orchestrator | Sunday 01 June 2025 22:34:07 +0000 (0:00:00.946) 0:06:59.063 *********** 2025-06-01 22:34:07.675677 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:34:07.760208 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:34:07.840607 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:34:07.916172 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:34:08.029068 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:34:08.151220 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:34:08.151413 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:34:08.152228 | orchestrator | 2025-06-01 22:34:08.152843 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-06-01 22:34:08.153256 | orchestrator | Sunday 01 June 2025 22:34:08 +0000 (0:00:00.609) 0:06:59.673 *********** 2025-06-01 22:34:08.303291 | orchestrator | ok: [testbed-manager] 2025-06-01 22:34:08.371248 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:34:08.437168 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:34:08.509819 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:34:08.809929 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:34:08.926933 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:34:08.928517 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:34:08.929972 | orchestrator | 2025-06-01 22:34:08.931966 | orchestrator | TASK 
[osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-06-01 22:34:08.933465 | orchestrator | Sunday 01 June 2025 22:34:08 +0000 (0:00:00.776) 0:07:00.449 *********** 2025-06-01 22:34:09.068723 | orchestrator | ok: [testbed-manager] 2025-06-01 22:34:09.138750 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:34:09.200664 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:34:09.271440 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:34:09.338205 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:34:09.454356 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:34:09.454995 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:34:09.455541 | orchestrator | 2025-06-01 22:34:09.455845 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-06-01 22:34:09.456316 | orchestrator | Sunday 01 June 2025 22:34:09 +0000 (0:00:00.528) 0:07:00.978 *********** 2025-06-01 22:34:09.617760 | orchestrator | ok: [testbed-manager] 2025-06-01 22:34:09.684295 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:34:09.756471 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:34:09.821689 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:34:09.897348 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:34:10.007835 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:34:10.008956 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:34:10.010494 | orchestrator | 2025-06-01 22:34:10.011231 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-06-01 22:34:10.012324 | orchestrator | Sunday 01 June 2025 22:34:10 +0000 (0:00:00.553) 0:07:01.531 *********** 2025-06-01 22:34:15.562387 | orchestrator | ok: [testbed-manager] 2025-06-01 22:34:15.563253 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:34:15.565824 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:34:15.566639 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:34:15.568377 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:34:15.568991 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:34:15.570283 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:34:15.571289 | orchestrator | 2025-06-01 22:34:15.572369 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-06-01 22:34:15.573463 | orchestrator | Sunday 01 June 2025 22:34:15 +0000 (0:00:05.553) 0:07:07.085 *********** 2025-06-01 22:34:15.783925 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:34:15.852377 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:34:15.924658 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:34:15.987584 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:34:16.101323 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:34:16.102292 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:34:16.104825 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:34:16.105580 | orchestrator | 2025-06-01 22:34:16.106304 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-06-01 22:34:16.107170 | orchestrator | Sunday 01 June 2025 22:34:16 +0000 (0:00:00.538) 0:07:07.624 *********** 2025-06-01 22:34:17.141310 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:34:17.142480 | orchestrator | 2025-06-01 
22:34:17.144615 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-06-01 22:34:17.144954 | orchestrator | Sunday 01 June 2025 22:34:17 +0000 (0:00:01.035) 0:07:08.659 *********** 2025-06-01 22:34:18.855534 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:34:18.857759 | orchestrator | ok: [testbed-manager] 2025-06-01 22:34:18.864297 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:34:18.864345 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:34:18.864721 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:34:18.865337 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:34:18.865916 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:34:18.866133 | orchestrator | 2025-06-01 22:34:18.866453 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-06-01 22:34:18.867091 | orchestrator | Sunday 01 June 2025 22:34:18 +0000 (0:00:01.717) 0:07:10.377 *********** 2025-06-01 22:34:20.006260 | orchestrator | ok: [testbed-manager] 2025-06-01 22:34:20.007517 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:34:20.010362 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:34:20.011452 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:34:20.012749 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:34:20.013841 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:34:20.014948 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:34:20.015969 | orchestrator | 2025-06-01 22:34:20.017532 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-06-01 22:34:20.018638 | orchestrator | Sunday 01 June 2025 22:34:19 +0000 (0:00:01.152) 0:07:11.529 *********** 2025-06-01 22:34:21.061654 | orchestrator | ok: [testbed-manager] 2025-06-01 22:34:21.063102 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:34:21.064786 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:34:21.065789 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:34:21.066744 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:34:21.068103 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:34:21.068598 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:34:21.069705 | orchestrator | 2025-06-01 22:34:21.070355 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-06-01 22:34:21.071177 | orchestrator | Sunday 01 June 2025 22:34:21 +0000 (0:00:01.052) 0:07:12.581 *********** 2025-06-01 22:34:22.747898 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-01 22:34:22.749414 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-01 22:34:22.750503 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-01 22:34:22.751835 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-01 22:34:22.752920 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-01 22:34:22.753832 | orchestrator | changed: [testbed-node-1] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-01 22:34:22.754719 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-01 22:34:22.755415 | orchestrator | 2025-06-01 22:34:22.757035 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-06-01 22:34:22.757057 | orchestrator | Sunday 01 June 2025 22:34:22 +0000 (0:00:01.686) 0:07:14.268 *********** 2025-06-01 22:34:23.533836 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:34:23.535075 | orchestrator | 2025-06-01 22:34:23.536127 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-06-01 22:34:23.537708 | orchestrator | Sunday 01 June 2025 22:34:23 +0000 (0:00:00.786) 0:07:15.054 *********** 2025-06-01 22:34:32.558931 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:34:32.563159 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:34:32.563199 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:34:32.565758 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:34:32.567626 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:34:32.568014 | orchestrator | changed: [testbed-manager] 2025-06-01 22:34:32.569667 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:34:32.570419 | orchestrator | 2025-06-01 22:34:32.571610 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-06-01 22:34:32.572375 | orchestrator | Sunday 01 June 2025 22:34:32 +0000 (0:00:09.025) 0:07:24.080 *********** 2025-06-01 22:34:34.316163 | orchestrator | ok: [testbed-manager] 2025-06-01 22:34:34.316435 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:34:34.319948 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:34:34.320190 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:34:34.320215 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:34:34.320227 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:34:34.320238 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:34:34.320249 | orchestrator | 2025-06-01 22:34:34.320777 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-06-01 22:34:34.321649 | orchestrator | Sunday 01 June 2025 22:34:34 +0000 (0:00:01.757) 0:07:25.837 *********** 2025-06-01 22:34:35.574144 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:34:35.574866 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:34:35.575705 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:34:35.576364 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:34:35.580032 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:34:35.580067 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:34:35.580079 | orchestrator | 2025-06-01 22:34:35.580795 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-06-01 22:34:35.581495 | orchestrator | Sunday 01 June 2025 22:34:35 +0000 (0:00:01.257) 0:07:27.094 *********** 2025-06-01 22:34:37.047035 | orchestrator | changed: [testbed-manager] 2025-06-01 22:34:37.048030 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:34:37.050989 | orchestrator | changed: [testbed-node-5] 
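The chrony role templates the distribution-specific configuration file (chrony.conf.j2 above) and restarts chronyd through the handler whose results continue below; lldpd was installed and enabled on all nodes in the same play. Once the handlers have run, both services can be spot-checked per node with standard client tools (illustrative commands, not executed by the job itself):

    # confirm chronyd has usable time sources and is tracking one of them
    chronyc sources -v
    chronyc tracking

    # list link-layer neighbors discovered by lldpd
    lldpcli show neighbors
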
2025-06-01 22:34:37.051018 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:34:37.051588 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:34:37.052369 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:34:37.052803 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:34:37.053724 | orchestrator | 2025-06-01 22:34:37.054252 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-06-01 22:34:37.055087 | orchestrator | 2025-06-01 22:34:37.055666 | orchestrator | TASK [Include hardening role] ************************************************** 2025-06-01 22:34:37.056274 | orchestrator | Sunday 01 June 2025 22:34:37 +0000 (0:00:01.475) 0:07:28.570 *********** 2025-06-01 22:34:37.182501 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:34:37.242902 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:34:37.310588 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:34:37.371145 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:34:37.457069 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:34:37.593483 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:34:37.596022 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:34:37.596333 | orchestrator | 2025-06-01 22:34:37.597025 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-06-01 22:34:37.597875 | orchestrator | 2025-06-01 22:34:37.598726 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-06-01 22:34:37.599681 | orchestrator | Sunday 01 June 2025 22:34:37 +0000 (0:00:00.547) 0:07:29.117 *********** 2025-06-01 22:34:38.926113 | orchestrator | changed: [testbed-manager] 2025-06-01 22:34:38.929896 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:34:38.929958 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:34:38.929971 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:34:38.930861 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:34:38.931134 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:34:38.931648 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:34:38.932957 | orchestrator | 2025-06-01 22:34:38.933176 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-06-01 22:34:38.933847 | orchestrator | Sunday 01 June 2025 22:34:38 +0000 (0:00:01.329) 0:07:30.447 *********** 2025-06-01 22:34:40.524212 | orchestrator | ok: [testbed-manager] 2025-06-01 22:34:40.524746 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:34:40.527426 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:34:40.527568 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:34:40.528720 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:34:40.529936 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:34:40.530716 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:34:40.532090 | orchestrator | 2025-06-01 22:34:40.534489 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-06-01 22:34:40.535289 | orchestrator | Sunday 01 June 2025 22:34:40 +0000 (0:00:01.599) 0:07:32.046 *********** 2025-06-01 22:34:40.647254 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:34:40.742956 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:34:40.833107 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:34:40.894198 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:34:40.977375 | orchestrator | skipping: 
[testbed-node-0] 2025-06-01 22:34:41.389189 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:34:41.389791 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:34:41.391700 | orchestrator | 2025-06-01 22:34:41.391846 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-06-01 22:34:41.393120 | orchestrator | Sunday 01 June 2025 22:34:41 +0000 (0:00:00.864) 0:07:32.911 *********** 2025-06-01 22:34:42.576306 | orchestrator | changed: [testbed-manager] 2025-06-01 22:34:42.579445 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:34:42.579916 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:34:42.580737 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:34:42.581732 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:34:42.582789 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:34:42.583813 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:34:42.584164 | orchestrator | 2025-06-01 22:34:42.584581 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-06-01 22:34:42.584987 | orchestrator | 2025-06-01 22:34:42.585837 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-06-01 22:34:42.586008 | orchestrator | Sunday 01 June 2025 22:34:42 +0000 (0:00:01.189) 0:07:34.101 *********** 2025-06-01 22:34:43.595136 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 22:34:43.599961 | orchestrator | 2025-06-01 22:34:43.600010 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-06-01 22:34:43.600025 | orchestrator | Sunday 01 June 2025 22:34:43 +0000 (0:00:01.016) 0:07:35.117 *********** 2025-06-01 22:34:44.459771 | orchestrator | ok: [testbed-manager] 2025-06-01 22:34:44.459891 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:34:44.459917 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:34:44.459938 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:34:44.460859 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:34:44.460944 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:34:44.461030 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:34:44.462252 | orchestrator | 2025-06-01 22:34:44.463078 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-06-01 22:34:44.463553 | orchestrator | Sunday 01 June 2025 22:34:44 +0000 (0:00:00.863) 0:07:35.981 *********** 2025-06-01 22:34:45.625778 | orchestrator | changed: [testbed-manager] 2025-06-01 22:34:45.626099 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:34:45.626733 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:34:45.627030 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:34:45.627954 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:34:45.631517 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:34:45.631555 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:34:45.631567 | orchestrator | 2025-06-01 22:34:45.631580 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-06-01 22:34:45.631593 | orchestrator | Sunday 01 June 2025 22:34:45 +0000 (0:00:01.166) 0:07:37.148 *********** 2025-06-01 22:34:46.657637 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2025-06-01 22:34:46.657952 | orchestrator | 2025-06-01 22:34:46.659211 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-06-01 22:34:46.660325 | orchestrator | Sunday 01 June 2025 22:34:46 +0000 (0:00:01.030) 0:07:38.179 *********** 2025-06-01 22:34:47.493656 | orchestrator | ok: [testbed-manager] 2025-06-01 22:34:47.495551 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:34:47.495821 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:34:47.497143 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:34:47.498205 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:34:47.498771 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:34:47.499184 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:34:47.500129 | orchestrator | 2025-06-01 22:34:47.500934 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-06-01 22:34:47.501359 | orchestrator | Sunday 01 June 2025 22:34:47 +0000 (0:00:00.835) 0:07:39.015 *********** 2025-06-01 22:34:48.574187 | orchestrator | changed: [testbed-manager] 2025-06-01 22:34:48.574406 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:34:48.577451 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:34:48.578093 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:34:48.579886 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:34:48.580325 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:34:48.581760 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:34:48.582132 | orchestrator | 2025-06-01 22:34:48.583980 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 22:34:48.584044 | orchestrator | 2025-06-01 22:34:48 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-01 22:34:48.584062 | orchestrator | 2025-06-01 22:34:48 | INFO  | Please wait and do not abort execution. 
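The two osism.commons.state invocations above persist the bootstrap status and a timestamp as local Ansible facts, so later plays can detect that a host has already been bootstrapped. Custom facts of this kind conventionally live under /etc/ansible/facts.d and surface as ansible_local on the next fact gathering; a sketch for inspecting them on a node (the exact fact file names are an assumption, they are not shown in this log):

    # list the custom fact files written by the state role
    ls -l /etc/ansible/facts.d/

    # re-gather facts and print only the local (custom) facts
    ansible localhost -m setup -a 'filter=ansible_local'
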
2025-06-01 22:34:48.585157 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-06-01 22:34:48.585778 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-01 22:34:48.586960 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-01 22:34:48.586988 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-01 22:34:48.587558 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-06-01 22:34:48.588279 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-01 22:34:48.588735 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-01 22:34:48.589155 | orchestrator | 2025-06-01 22:34:48.589882 | orchestrator | 2025-06-01 22:34:48.590189 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 22:34:48.591090 | orchestrator | Sunday 01 June 2025 22:34:48 +0000 (0:00:01.081) 0:07:40.097 *********** 2025-06-01 22:34:48.592882 | orchestrator | =============================================================================== 2025-06-01 22:34:48.593643 | orchestrator | osism.commons.packages : Install required packages --------------------- 74.33s 2025-06-01 22:34:48.593786 | orchestrator | osism.commons.packages : Download required packages -------------------- 36.41s 2025-06-01 22:34:48.594273 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 32.60s 2025-06-01 22:34:48.595237 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.08s 2025-06-01 22:34:48.595897 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.01s 2025-06-01 22:34:48.595917 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.26s 2025-06-01 22:34:48.596677 | orchestrator | osism.services.docker : Install docker package -------------------------- 9.86s 2025-06-01 22:34:48.596763 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.09s 2025-06-01 22:34:48.597272 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.05s 2025-06-01 22:34:48.598007 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.04s 2025-06-01 22:34:48.598337 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.03s 2025-06-01 22:34:48.598830 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.41s 2025-06-01 22:34:48.599110 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.34s 2025-06-01 22:34:48.599422 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.06s 2025-06-01 22:34:48.599819 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.05s 2025-06-01 22:34:48.600293 | orchestrator | osism.services.docker : Add repository ---------------------------------- 6.83s 2025-06-01 22:34:48.600773 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 5.72s 2025-06-01 22:34:48.601146 | 
orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.58s 2025-06-01 22:34:48.601471 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.58s 2025-06-01 22:34:48.601818 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.55s 2025-06-01 22:34:49.333907 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-06-01 22:34:49.334011 | orchestrator | + osism apply network 2025-06-01 22:34:51.591894 | orchestrator | Registering Redlock._acquired_script 2025-06-01 22:34:51.591987 | orchestrator | Registering Redlock._extend_script 2025-06-01 22:34:51.592001 | orchestrator | Registering Redlock._release_script 2025-06-01 22:34:51.663717 | orchestrator | 2025-06-01 22:34:51 | INFO  | Task 618fdc13-334e-484f-8cc1-1dd9407787dc (network) was prepared for execution. 2025-06-01 22:34:51.663789 | orchestrator | 2025-06-01 22:34:51 | INFO  | It takes a moment until task 618fdc13-334e-484f-8cc1-1dd9407787dc (network) has been started and output is visible here. 2025-06-01 22:34:56.042372 | orchestrator | 2025-06-01 22:34:56.042423 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-06-01 22:34:56.042923 | orchestrator | 2025-06-01 22:34:56.043752 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-06-01 22:34:56.044652 | orchestrator | Sunday 01 June 2025 22:34:56 +0000 (0:00:00.310) 0:00:00.310 *********** 2025-06-01 22:34:56.193282 | orchestrator | ok: [testbed-manager] 2025-06-01 22:34:56.270620 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:34:56.346074 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:34:56.441879 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:34:56.644176 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:34:56.776679 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:34:56.777285 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:34:56.778346 | orchestrator | 2025-06-01 22:34:56.781784 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-06-01 22:34:56.782463 | orchestrator | Sunday 01 June 2025 22:34:56 +0000 (0:00:00.734) 0:00:01.045 *********** 2025-06-01 22:34:57.986753 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 22:34:57.986910 | orchestrator | 2025-06-01 22:34:57.987997 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-06-01 22:34:57.988940 | orchestrator | Sunday 01 June 2025 22:34:57 +0000 (0:00:01.203) 0:00:02.248 *********** 2025-06-01 22:34:59.828174 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:34:59.832283 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:34:59.832332 | orchestrator | ok: [testbed-manager] 2025-06-01 22:34:59.832344 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:34:59.832803 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:34:59.833313 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:34:59.835245 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:34:59.835665 | orchestrator | 2025-06-01 22:34:59.838089 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-06-01 22:34:59.838118 | orchestrator | Sunday 01 June 2025 22:34:59 +0000 (0:00:01.849) 
0:00:04.098 *********** 2025-06-01 22:35:01.623758 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:35:01.624165 | orchestrator | ok: [testbed-manager] 2025-06-01 22:35:01.628328 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:35:01.628906 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:35:01.629710 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:35:01.630938 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:35:01.631437 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:35:01.632579 | orchestrator | 2025-06-01 22:35:01.633029 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-06-01 22:35:01.634351 | orchestrator | Sunday 01 June 2025 22:35:01 +0000 (0:00:01.792) 0:00:05.890 *********** 2025-06-01 22:35:02.159147 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-06-01 22:35:02.159427 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-06-01 22:35:02.160554 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-06-01 22:35:02.577542 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-06-01 22:35:02.577790 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-06-01 22:35:02.579131 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-06-01 22:35:02.581517 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-06-01 22:35:02.581584 | orchestrator | 2025-06-01 22:35:02.582406 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-06-01 22:35:02.583821 | orchestrator | Sunday 01 June 2025 22:35:02 +0000 (0:00:00.959) 0:00:06.849 *********** 2025-06-01 22:35:06.007562 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-01 22:35:06.009166 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-01 22:35:06.012653 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-01 22:35:06.013871 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-01 22:35:06.014535 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-01 22:35:06.015284 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-01 22:35:06.016051 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-01 22:35:06.017118 | orchestrator | 2025-06-01 22:35:06.017225 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-06-01 22:35:06.018109 | orchestrator | Sunday 01 June 2025 22:35:05 +0000 (0:00:03.424) 0:00:10.274 *********** 2025-06-01 22:35:07.448102 | orchestrator | changed: [testbed-manager] 2025-06-01 22:35:07.448759 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:35:07.449788 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:35:07.451187 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:35:07.452096 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:35:07.453269 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:35:07.454386 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:35:07.455186 | orchestrator | 2025-06-01 22:35:07.455770 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-06-01 22:35:07.456282 | orchestrator | Sunday 01 June 2025 22:35:07 +0000 (0:00:01.444) 0:00:11.718 *********** 2025-06-01 22:35:09.602552 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-01 22:35:09.603045 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-01 22:35:09.603632 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-01 22:35:09.604527 | 
orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-01 22:35:09.604871 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-01 22:35:09.605517 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-01 22:35:09.605928 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-01 22:35:09.607094 | orchestrator | 2025-06-01 22:35:09.607121 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-06-01 22:35:09.607304 | orchestrator | Sunday 01 June 2025 22:35:09 +0000 (0:00:02.154) 0:00:13.873 *********** 2025-06-01 22:35:10.038406 | orchestrator | ok: [testbed-manager] 2025-06-01 22:35:10.332764 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:35:10.729244 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:35:10.730278 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:35:10.734008 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:35:10.735299 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:35:10.735845 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:35:10.737215 | orchestrator | 2025-06-01 22:35:10.740110 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-06-01 22:35:10.740679 | orchestrator | Sunday 01 June 2025 22:35:10 +0000 (0:00:01.123) 0:00:14.996 *********** 2025-06-01 22:35:10.911560 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:35:11.017417 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:35:11.129692 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:35:11.227481 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:35:11.340214 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:35:11.509066 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:35:11.510132 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:35:11.510959 | orchestrator | 2025-06-01 22:35:11.512083 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-06-01 22:35:11.512658 | orchestrator | Sunday 01 June 2025 22:35:11 +0000 (0:00:00.784) 0:00:15.781 *********** 2025-06-01 22:35:13.563749 | orchestrator | ok: [testbed-manager] 2025-06-01 22:35:13.565478 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:35:13.566802 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:35:13.568004 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:35:13.569599 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:35:13.570621 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:35:13.571327 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:35:13.572384 | orchestrator | 2025-06-01 22:35:13.572834 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-06-01 22:35:13.573535 | orchestrator | Sunday 01 June 2025 22:35:13 +0000 (0:00:02.049) 0:00:17.830 *********** 2025-06-01 22:35:13.840954 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:35:13.924519 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:35:14.016016 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:35:14.101779 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:35:14.434634 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:35:14.435580 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:35:14.439110 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-06-01 22:35:14.439160 | orchestrator | 2025-06-01 22:35:14.439174 | orchestrator | TASK [osism.commons.network : Manage 
service networkd-dispatcher] ************** 2025-06-01 22:35:14.440375 | orchestrator | Sunday 01 June 2025 22:35:14 +0000 (0:00:00.875) 0:00:18.705 *********** 2025-06-01 22:35:16.078780 | orchestrator | ok: [testbed-manager] 2025-06-01 22:35:16.080095 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:35:16.081641 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:35:16.083963 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:35:16.086313 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:35:16.086350 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:35:16.086362 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:35:16.087048 | orchestrator | 2025-06-01 22:35:16.088851 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-06-01 22:35:16.088874 | orchestrator | Sunday 01 June 2025 22:35:16 +0000 (0:00:01.639) 0:00:20.345 *********** 2025-06-01 22:35:17.366981 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 22:35:17.367812 | orchestrator | 2025-06-01 22:35:17.369074 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-06-01 22:35:17.370234 | orchestrator | Sunday 01 June 2025 22:35:17 +0000 (0:00:01.288) 0:00:21.634 *********** 2025-06-01 22:35:18.494677 | orchestrator | ok: [testbed-manager] 2025-06-01 22:35:18.496381 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:35:18.497100 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:35:18.498125 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:35:18.498858 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:35:18.499713 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:35:18.500632 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:35:18.501749 | orchestrator | 2025-06-01 22:35:18.502876 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-06-01 22:35:18.503884 | orchestrator | Sunday 01 June 2025 22:35:18 +0000 (0:00:01.128) 0:00:22.762 *********** 2025-06-01 22:35:18.692203 | orchestrator | ok: [testbed-manager] 2025-06-01 22:35:18.782842 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:35:18.869181 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:35:18.969297 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:35:19.054591 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:35:19.197754 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:35:19.197964 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:35:19.202494 | orchestrator | 2025-06-01 22:35:19.202525 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-06-01 22:35:19.202539 | orchestrator | Sunday 01 June 2025 22:35:19 +0000 (0:00:00.702) 0:00:23.464 *********** 2025-06-01 22:35:19.620174 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-01 22:35:19.621735 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-06-01 22:35:19.972032 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-01 22:35:19.972195 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-06-01 22:35:19.973886 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-01 
22:35:19.974334 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-06-01 22:35:19.975584 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-01 22:35:19.976203 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-06-01 22:35:19.976901 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-01 22:35:19.977618 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-06-01 22:35:20.448119 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-01 22:35:20.449383 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-06-01 22:35:20.450871 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-01 22:35:20.453326 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-06-01 22:35:20.453364 | orchestrator | 2025-06-01 22:35:20.453379 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-06-01 22:35:20.453412 | orchestrator | Sunday 01 June 2025 22:35:20 +0000 (0:00:01.245) 0:00:24.710 *********** 2025-06-01 22:35:20.634824 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:35:20.718354 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:35:20.802141 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:35:20.883326 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:35:20.962706 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:35:21.098514 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:35:21.098995 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:35:21.099833 | orchestrator | 2025-06-01 22:35:21.100983 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-06-01 22:35:21.101552 | orchestrator | Sunday 01 June 2025 22:35:21 +0000 (0:00:00.659) 0:00:25.370 *********** 2025-06-01 22:35:24.733693 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-4, testbed-node-3, testbed-node-5 2025-06-01 22:35:24.734288 | orchestrator | 2025-06-01 22:35:24.738657 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-06-01 22:35:24.739604 | orchestrator | Sunday 01 June 2025 22:35:24 +0000 (0:00:03.630) 0:00:29.000 *********** 2025-06-01 22:35:29.632372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-06-01 22:35:29.638216 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-06-01 22:35:29.639384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-06-01 22:35:29.640640 | orchestrator | 
changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-06-01 22:35:29.642297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-06-01 22:35:29.642323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-06-01 22:35:29.643669 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-06-01 22:35:29.644591 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-06-01 22:35:29.645077 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-06-01 22:35:29.646970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-06-01 22:35:29.648116 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-06-01 22:35:29.648529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-06-01 22:35:29.650256 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-06-01 22:35:29.651214 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-06-01 22:35:29.652827 | orchestrator | 2025-06-01 22:35:29.653534 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-06-01 22:35:29.654287 | orchestrator | Sunday 01 June 2025 22:35:29 +0000 (0:00:04.898) 
0:00:33.899 *********** 2025-06-01 22:35:34.449008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-06-01 22:35:34.449850 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-06-01 22:35:34.451813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-06-01 22:35:34.452180 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-06-01 22:35:34.453677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-06-01 22:35:34.454205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-06-01 22:35:34.455057 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-06-01 22:35:34.455853 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-06-01 22:35:34.456570 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-06-01 22:35:34.458062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-06-01 22:35:34.458457 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-06-01 22:35:34.459171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', 
'192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-06-01 22:35:34.459538 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-06-01 22:35:34.460781 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-06-01 22:35:34.461037 | orchestrator | 2025-06-01 22:35:34.461871 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-06-01 22:35:34.462110 | orchestrator | Sunday 01 June 2025 22:35:34 +0000 (0:00:04.819) 0:00:38.719 *********** 2025-06-01 22:35:35.726707 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 22:35:35.727530 | orchestrator | 2025-06-01 22:35:35.730614 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-06-01 22:35:35.730723 | orchestrator | Sunday 01 June 2025 22:35:35 +0000 (0:00:01.274) 0:00:39.994 *********** 2025-06-01 22:35:36.228666 | orchestrator | ok: [testbed-manager] 2025-06-01 22:35:36.532537 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:35:36.950219 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:35:36.952594 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:35:36.953702 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:35:36.955574 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:35:36.956652 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:35:36.957249 | orchestrator | 2025-06-01 22:35:36.958473 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-06-01 22:35:36.959485 | orchestrator | Sunday 01 June 2025 22:35:36 +0000 (0:00:01.226) 0:00:41.221 *********** 2025-06-01 22:35:37.039198 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-01 22:35:37.040677 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-01 22:35:37.045172 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-01 22:35:37.153921 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-01 22:35:37.154080 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-01 22:35:37.155195 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-01 22:35:37.155797 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-01 22:35:37.156535 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-01 22:35:37.268755 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:35:37.269173 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-01 22:35:37.270093 | orchestrator | skipping: [testbed-node-1] => 
(item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-01 22:35:37.271459 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-01 22:35:37.271929 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-01 22:35:37.366382 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:35:37.367060 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-01 22:35:37.368565 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-01 22:35:37.371017 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-01 22:35:37.371040 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-01 22:35:37.486887 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:35:37.487524 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-01 22:35:37.488727 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-01 22:35:37.489523 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-01 22:35:37.490515 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-01 22:35:37.787396 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:35:37.787508 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-01 22:35:37.787522 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-01 22:35:37.788009 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-01 22:35:37.788034 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-01 22:35:39.159498 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:35:39.160289 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:35:39.161494 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-01 22:35:39.164190 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-01 22:35:39.164243 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-01 22:35:39.164256 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-01 22:35:39.164777 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:35:39.166081 | orchestrator | 2025-06-01 22:35:39.166879 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-06-01 22:35:39.167429 | orchestrator | Sunday 01 June 2025 22:35:39 +0000 (0:00:02.206) 0:00:43.427 *********** 2025-06-01 22:35:39.327582 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:35:39.416170 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:35:39.505028 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:35:39.601663 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:35:39.683922 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:35:39.825298 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:35:39.826225 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:35:39.827244 | orchestrator | 2025-06-01 22:35:39.830950 | orchestrator | RUNNING HANDLER [osism.commons.network : 
Netplan configuration changed] ******** 2025-06-01 22:35:39.830966 | orchestrator | Sunday 01 June 2025 22:35:39 +0000 (0:00:00.669) 0:00:44.097 *********** 2025-06-01 22:35:40.000450 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:35:40.263237 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:35:40.354678 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:35:40.441831 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:35:40.527937 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:35:40.578298 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:35:40.578852 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:35:40.579582 | orchestrator | 2025-06-01 22:35:40.580851 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 22:35:40.580894 | orchestrator | 2025-06-01 22:35:40 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-01 22:35:40.580908 | orchestrator | 2025-06-01 22:35:40 | INFO  | Please wait and do not abort execution. 2025-06-01 22:35:40.581631 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-01 22:35:40.582736 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-01 22:35:40.583801 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-01 22:35:40.584786 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-01 22:35:40.585384 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-01 22:35:40.585647 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-01 22:35:40.586165 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-01 22:35:40.586916 | orchestrator | 2025-06-01 22:35:40.587423 | orchestrator | 2025-06-01 22:35:40.587747 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 22:35:40.588039 | orchestrator | Sunday 01 June 2025 22:35:40 +0000 (0:00:00.750) 0:00:44.848 *********** 2025-06-01 22:35:40.588288 | orchestrator | =============================================================================== 2025-06-01 22:35:40.588726 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 4.90s 2025-06-01 22:35:40.588992 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 4.82s 2025-06-01 22:35:40.589350 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 3.63s 2025-06-01 22:35:40.589590 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.42s 2025-06-01 22:35:40.590080 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.21s 2025-06-01 22:35:40.590188 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 2.15s 2025-06-01 22:35:40.590686 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.05s 2025-06-01 22:35:40.591079 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.85s 2025-06-01 22:35:40.591341 | orchestrator | osism.commons.network : Remove 
ifupdown package ------------------------- 1.79s 2025-06-01 22:35:40.591709 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.64s 2025-06-01 22:35:40.591950 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.44s 2025-06-01 22:35:40.592350 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.29s 2025-06-01 22:35:40.592597 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.27s 2025-06-01 22:35:40.592862 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.25s 2025-06-01 22:35:40.593368 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.23s 2025-06-01 22:35:40.594261 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.20s 2025-06-01 22:35:40.594710 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.13s 2025-06-01 22:35:40.595104 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.12s 2025-06-01 22:35:40.595126 | orchestrator | osism.commons.network : Create required directories --------------------- 0.96s 2025-06-01 22:35:40.595548 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.88s 2025-06-01 22:35:41.220288 | orchestrator | + osism apply wireguard 2025-06-01 22:35:42.873339 | orchestrator | Registering Redlock._acquired_script 2025-06-01 22:35:42.873459 | orchestrator | Registering Redlock._extend_script 2025-06-01 22:35:42.873475 | orchestrator | Registering Redlock._release_script 2025-06-01 22:35:42.943375 | orchestrator | 2025-06-01 22:35:42 | INFO  | Task 51168bf2-8ffd-484d-823d-88616dc2f795 (wireguard) was prepared for execution. 2025-06-01 22:35:42.943524 | orchestrator | 2025-06-01 22:35:42 | INFO  | It takes a moment until task 51168bf2-8ffd-484d-823d-88616dc2f795 (wireguard) has been started and output is visible here. 
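
For reference, the netdev/network pairs created above follow the standard systemd-networkd VXLAN layout: one .netdev per VNI carrying the local endpoint, one .network carrying the overlay address, and the per-peer "dests" realized as static flooding FDB entries. A minimal sketch of what the generated pair for vxlan1 on testbed-manager could look like, with file names and field values inferred from the log items rather than copied from the osism.commons.network templates:

  # Sketch only -- approximates the files the role templates under /etc/systemd/network.
  cat > /etc/systemd/network/30-vxlan1.netdev <<'EOF'
  [NetDev]
  Name=vxlan1
  Kind=vxlan
  MTUBytes=1350

  [VXLAN]
  VNI=23
  Local=192.168.16.5
  EOF

  cat > /etc/systemd/network/30-vxlan1.network <<'EOF'
  [Match]
  Name=vxlan1

  [Network]
  Address=192.168.128.5/20

  # One all-zero FDB entry per peer in 'dests' floods BUM traffic to that endpoint;
  # the real files would repeat this section for every destination IP.
  [BridgeFDB]
  MACAddress=00:00:00:00:00:00
  Destination=192.168.16.10
  EOF

  networkctl reload   # equivalent of the "Reload systemd-networkd" handler (skipped in this run)
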
2025-06-01 22:35:47.117166 | orchestrator | 2025-06-01 22:35:47.117293 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-06-01 22:35:47.118344 | orchestrator | 2025-06-01 22:35:47.118954 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-06-01 22:35:47.119620 | orchestrator | Sunday 01 June 2025 22:35:47 +0000 (0:00:00.229) 0:00:00.229 *********** 2025-06-01 22:35:48.729233 | orchestrator | ok: [testbed-manager] 2025-06-01 22:35:48.730367 | orchestrator | 2025-06-01 22:35:48.731646 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-06-01 22:35:48.732806 | orchestrator | Sunday 01 June 2025 22:35:48 +0000 (0:00:01.613) 0:00:01.843 *********** 2025-06-01 22:35:55.287964 | orchestrator | changed: [testbed-manager] 2025-06-01 22:35:55.289137 | orchestrator | 2025-06-01 22:35:55.290432 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-06-01 22:35:55.292193 | orchestrator | Sunday 01 June 2025 22:35:55 +0000 (0:00:06.557) 0:00:08.400 *********** 2025-06-01 22:35:55.893610 | orchestrator | changed: [testbed-manager] 2025-06-01 22:35:55.894581 | orchestrator | 2025-06-01 22:35:55.895842 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-06-01 22:35:55.897502 | orchestrator | Sunday 01 June 2025 22:35:55 +0000 (0:00:00.608) 0:00:09.008 *********** 2025-06-01 22:35:56.372618 | orchestrator | changed: [testbed-manager] 2025-06-01 22:35:56.373160 | orchestrator | 2025-06-01 22:35:56.373543 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-06-01 22:35:56.375484 | orchestrator | Sunday 01 June 2025 22:35:56 +0000 (0:00:00.477) 0:00:09.485 *********** 2025-06-01 22:35:56.919186 | orchestrator | ok: [testbed-manager] 2025-06-01 22:35:56.919841 | orchestrator | 2025-06-01 22:35:56.921978 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-06-01 22:35:56.922002 | orchestrator | Sunday 01 June 2025 22:35:56 +0000 (0:00:00.546) 0:00:10.032 *********** 2025-06-01 22:35:57.488717 | orchestrator | ok: [testbed-manager] 2025-06-01 22:35:57.489531 | orchestrator | 2025-06-01 22:35:57.490463 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-06-01 22:35:57.491275 | orchestrator | Sunday 01 June 2025 22:35:57 +0000 (0:00:00.566) 0:00:10.599 *********** 2025-06-01 22:35:57.914323 | orchestrator | ok: [testbed-manager] 2025-06-01 22:35:57.915552 | orchestrator | 2025-06-01 22:35:57.917812 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-06-01 22:35:57.918182 | orchestrator | Sunday 01 June 2025 22:35:57 +0000 (0:00:00.429) 0:00:11.028 *********** 2025-06-01 22:35:59.162806 | orchestrator | changed: [testbed-manager] 2025-06-01 22:35:59.162967 | orchestrator | 2025-06-01 22:35:59.164297 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-06-01 22:35:59.165257 | orchestrator | Sunday 01 June 2025 22:35:59 +0000 (0:00:01.246) 0:00:12.274 *********** 2025-06-01 22:36:00.105472 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-01 22:36:00.105621 | orchestrator | changed: [testbed-manager] 2025-06-01 22:36:00.105638 | orchestrator | 2025-06-01 22:36:00.108259 | orchestrator | TASK 
[osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-06-01 22:36:00.108324 | orchestrator | Sunday 01 June 2025 22:36:00 +0000 (0:00:00.941) 0:00:13.216 *********** 2025-06-01 22:36:01.834238 | orchestrator | changed: [testbed-manager] 2025-06-01 22:36:01.834335 | orchestrator | 2025-06-01 22:36:01.834930 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-06-01 22:36:01.835673 | orchestrator | Sunday 01 June 2025 22:36:01 +0000 (0:00:01.728) 0:00:14.944 *********** 2025-06-01 22:36:02.791979 | orchestrator | changed: [testbed-manager] 2025-06-01 22:36:02.793531 | orchestrator | 2025-06-01 22:36:02.794727 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 22:36:02.795494 | orchestrator | 2025-06-01 22:36:02 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-01 22:36:02.795774 | orchestrator | 2025-06-01 22:36:02 | INFO  | Please wait and do not abort execution. 2025-06-01 22:36:02.797033 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 22:36:02.797850 | orchestrator | 2025-06-01 22:36:02.798208 | orchestrator | 2025-06-01 22:36:02.799624 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 22:36:02.800436 | orchestrator | Sunday 01 June 2025 22:36:02 +0000 (0:00:00.962) 0:00:15.907 *********** 2025-06-01 22:36:02.801088 | orchestrator | =============================================================================== 2025-06-01 22:36:02.801464 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.56s 2025-06-01 22:36:02.802804 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.73s 2025-06-01 22:36:02.803102 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.61s 2025-06-01 22:36:02.803595 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.25s 2025-06-01 22:36:02.804038 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.96s 2025-06-01 22:36:02.804657 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.94s 2025-06-01 22:36:02.805516 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.61s 2025-06-01 22:36:02.806148 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.57s 2025-06-01 22:36:02.806832 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.55s 2025-06-01 22:36:02.807464 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.48s 2025-06-01 22:36:02.808232 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.43s 2025-06-01 22:36:03.447882 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-06-01 22:36:03.484738 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-06-01 22:36:03.484789 | orchestrator | Dload Upload Total Spent Left Speed 2025-06-01 22:36:03.566875 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 182 0 --:--:-- --:--:-- --:--:-- 182 2025-06-01 22:36:03.583560 | orchestrator | + osism apply --environment custom workarounds 
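
The WireGuard play above reduces to standard wg(8) key handling plus a templated wg0.conf. A hedged shell equivalent of what the osism.services.wireguard tasks do on the manager -- port, addresses, and the peer block are illustrative placeholders, not role defaults:

  umask 077
  # "Create public and private key - server" / "Create preshared key"
  wg genkey | tee /etc/wireguard/server.key | wg pubkey > /etc/wireguard/server.pub
  wg genpsk > /etc/wireguard/preshared.key

  # "Copy wg0.conf configuration file" -- all values below are placeholders
  cat > /etc/wireguard/wg0.conf <<EOF
  [Interface]
  PrivateKey = $(cat /etc/wireguard/server.key)
  Address = <tunnel address>/24
  ListenPort = 51820

  [Peer]
  PublicKey = <client public key>
  PresharedKey = $(cat /etc/wireguard/preshared.key)
  AllowedIPs = <client tunnel address>
  EOF

  # "Manage wg-quick@wg0.service service" / "Restart wg0 service"
  systemctl enable --now wg-quick@wg0
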
2025-06-01 22:36:05.263330 | orchestrator | 2025-06-01 22:36:05 | INFO  | Trying to run play workarounds in environment custom 2025-06-01 22:36:05.268089 | orchestrator | Registering Redlock._acquired_script 2025-06-01 22:36:05.268127 | orchestrator | Registering Redlock._extend_script 2025-06-01 22:36:05.268139 | orchestrator | Registering Redlock._release_script 2025-06-01 22:36:05.337819 | orchestrator | 2025-06-01 22:36:05 | INFO  | Task 3472d4b9-5127-45f8-b0d3-fe5ad44725e2 (workarounds) was prepared for execution. 2025-06-01 22:36:05.337902 | orchestrator | 2025-06-01 22:36:05 | INFO  | It takes a moment until task 3472d4b9-5127-45f8-b0d3-fe5ad44725e2 (workarounds) has been started and output is visible here. 2025-06-01 22:36:09.570847 | orchestrator | 2025-06-01 22:36:09.572137 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-01 22:36:09.572258 | orchestrator | 2025-06-01 22:36:09.573522 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-06-01 22:36:09.575770 | orchestrator | Sunday 01 June 2025 22:36:09 +0000 (0:00:00.152) 0:00:00.152 *********** 2025-06-01 22:36:09.751867 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-06-01 22:36:09.838622 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-06-01 22:36:09.925694 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-06-01 22:36:10.029146 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-06-01 22:36:10.232521 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-06-01 22:36:10.398280 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-06-01 22:36:10.398472 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-06-01 22:36:10.399489 | orchestrator | 2025-06-01 22:36:10.399781 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-06-01 22:36:10.400316 | orchestrator | 2025-06-01 22:36:10.400744 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-06-01 22:36:10.401056 | orchestrator | Sunday 01 June 2025 22:36:10 +0000 (0:00:00.831) 0:00:00.983 *********** 2025-06-01 22:36:12.909716 | orchestrator | ok: [testbed-manager] 2025-06-01 22:36:12.909823 | orchestrator | 2025-06-01 22:36:12.909896 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-06-01 22:36:12.910109 | orchestrator | 2025-06-01 22:36:12.910184 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-06-01 22:36:12.910623 | orchestrator | Sunday 01 June 2025 22:36:12 +0000 (0:00:02.507) 0:00:03.491 *********** 2025-06-01 22:36:14.799259 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:36:14.799576 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:36:14.800209 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:36:14.801612 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:36:14.802622 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:36:14.803138 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:36:14.805473 | orchestrator | 2025-06-01 22:36:14.806761 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-06-01 22:36:14.809830 | orchestrator | 2025-06-01 22:36:14.809853 | orchestrator | TASK 
[Copy custom CA certificates] ********************************************* 2025-06-01 22:36:14.809866 | orchestrator | Sunday 01 June 2025 22:36:14 +0000 (0:00:01.889) 0:00:05.380 *********** 2025-06-01 22:36:16.376940 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-01 22:36:16.377044 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-01 22:36:16.378277 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-01 22:36:16.378303 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-01 22:36:16.378315 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-01 22:36:16.378551 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-01 22:36:16.379087 | orchestrator | 2025-06-01 22:36:16.379363 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-06-01 22:36:16.379674 | orchestrator | Sunday 01 June 2025 22:36:16 +0000 (0:00:01.572) 0:00:06.953 *********** 2025-06-01 22:36:20.120689 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:36:20.124289 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:36:20.124404 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:36:20.124419 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:36:20.125086 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:36:20.125871 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:36:20.127937 | orchestrator | 2025-06-01 22:36:20.128602 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-06-01 22:36:20.129265 | orchestrator | Sunday 01 June 2025 22:36:20 +0000 (0:00:03.750) 0:00:10.704 *********** 2025-06-01 22:36:20.274695 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:36:20.358912 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:36:20.440738 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:36:20.530746 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:36:20.850294 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:36:20.851570 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:36:20.853126 | orchestrator | 2025-06-01 22:36:20.853957 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-06-01 22:36:20.855110 | orchestrator | 2025-06-01 22:36:20.857126 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-06-01 22:36:20.858065 | orchestrator | Sunday 01 June 2025 22:36:20 +0000 (0:00:00.728) 0:00:11.432 *********** 2025-06-01 22:36:22.496265 | orchestrator | changed: [testbed-manager] 2025-06-01 22:36:22.496716 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:36:22.499687 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:36:22.500774 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:36:22.502178 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:36:22.502823 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:36:22.504777 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:36:22.504872 | orchestrator | 2025-06-01 22:36:22.505655 | orchestrator | TASK [Copy workarounds systemd 
unit file] ************************************** 2025-06-01 22:36:22.505779 | orchestrator | Sunday 01 June 2025 22:36:22 +0000 (0:00:01.645) 0:00:13.078 *********** 2025-06-01 22:36:24.119943 | orchestrator | changed: [testbed-manager] 2025-06-01 22:36:24.120786 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:36:24.122101 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:36:24.123615 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:36:24.124518 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:36:24.125724 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:36:24.126808 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:36:24.128209 | orchestrator | 2025-06-01 22:36:24.129407 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-06-01 22:36:24.130913 | orchestrator | Sunday 01 June 2025 22:36:24 +0000 (0:00:01.621) 0:00:14.700 *********** 2025-06-01 22:36:25.590672 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:36:25.590875 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:36:25.593097 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:36:25.594837 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:36:25.597091 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:36:25.598722 | orchestrator | ok: [testbed-manager] 2025-06-01 22:36:25.598753 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:36:25.599955 | orchestrator | 2025-06-01 22:36:25.601191 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-06-01 22:36:25.602400 | orchestrator | Sunday 01 June 2025 22:36:25 +0000 (0:00:01.473) 0:00:16.173 *********** 2025-06-01 22:36:27.367560 | orchestrator | changed: [testbed-manager] 2025-06-01 22:36:27.368765 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:36:27.370498 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:36:27.370539 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:36:27.371570 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:36:27.372273 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:36:27.373479 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:36:27.374675 | orchestrator | 2025-06-01 22:36:27.375366 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-06-01 22:36:27.376044 | orchestrator | Sunday 01 June 2025 22:36:27 +0000 (0:00:01.774) 0:00:17.947 *********** 2025-06-01 22:36:27.549034 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:36:27.680745 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:36:27.757248 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:36:27.834639 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:36:27.923194 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:36:28.050150 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:36:28.050565 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:36:28.053494 | orchestrator | 2025-06-01 22:36:28.054571 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-06-01 22:36:28.056112 | orchestrator | 2025-06-01 22:36:28.057080 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-06-01 22:36:28.057725 | orchestrator | Sunday 01 June 2025 22:36:28 +0000 (0:00:00.685) 0:00:18.632 *********** 2025-06-01 22:36:30.579914 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:36:30.580057 | orchestrator | ok: [testbed-node-4] 
2025-06-01 22:36:30.580462 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:36:30.581287 | orchestrator | ok: [testbed-manager] 2025-06-01 22:36:30.581757 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:36:30.582592 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:36:30.584782 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:36:30.585164 | orchestrator | 2025-06-01 22:36:30.585429 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 22:36:30.585799 | orchestrator | 2025-06-01 22:36:30 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-01 22:36:30.585926 | orchestrator | 2025-06-01 22:36:30 | INFO  | Please wait and do not abort execution. 2025-06-01 22:36:30.587058 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-01 22:36:30.587390 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 22:36:30.587916 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 22:36:30.588482 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 22:36:30.588687 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 22:36:30.589172 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 22:36:30.589563 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 22:36:30.589866 | orchestrator | 2025-06-01 22:36:30.590363 | orchestrator | 2025-06-01 22:36:30.590668 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 22:36:30.591047 | orchestrator | Sunday 01 June 2025 22:36:30 +0000 (0:00:02.530) 0:00:21.163 *********** 2025-06-01 22:36:30.592110 | orchestrator | =============================================================================== 2025-06-01 22:36:30.592375 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.75s 2025-06-01 22:36:30.592856 | orchestrator | Install python3-docker -------------------------------------------------- 2.53s 2025-06-01 22:36:30.593367 | orchestrator | Apply netplan configuration --------------------------------------------- 2.51s 2025-06-01 22:36:30.593854 | orchestrator | Apply netplan configuration --------------------------------------------- 1.89s 2025-06-01 22:36:30.594592 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.77s 2025-06-01 22:36:30.594842 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.65s 2025-06-01 22:36:30.595231 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.62s 2025-06-01 22:36:30.595722 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.57s 2025-06-01 22:36:30.596150 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.47s 2025-06-01 22:36:30.596980 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.83s 2025-06-01 22:36:30.597287 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.73s 2025-06-01 22:36:30.598088 | orchestrator | 
Enable and start workarounds.service (RedHat) --------------------------- 0.69s 2025-06-01 22:36:31.200730 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-06-01 22:36:32.910221 | orchestrator | Registering Redlock._acquired_script 2025-06-01 22:36:32.910379 | orchestrator | Registering Redlock._extend_script 2025-06-01 22:36:32.910397 | orchestrator | Registering Redlock._release_script 2025-06-01 22:36:32.971063 | orchestrator | 2025-06-01 22:36:32 | INFO  | Task dd157800-cf97-4107-aab3-f0bc2c9ca9fe (reboot) was prepared for execution. 2025-06-01 22:36:32.971148 | orchestrator | 2025-06-01 22:36:32 | INFO  | It takes a moment until task dd157800-cf97-4107-aab3-f0bc2c9ca9fe (reboot) has been started and output is visible here. 2025-06-01 22:36:37.167252 | orchestrator | 2025-06-01 22:36:37.167852 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-01 22:36:37.169539 | orchestrator | 2025-06-01 22:36:37.170803 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-01 22:36:37.172442 | orchestrator | Sunday 01 June 2025 22:36:37 +0000 (0:00:00.219) 0:00:00.219 *********** 2025-06-01 22:36:37.267174 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:36:37.267270 | orchestrator | 2025-06-01 22:36:37.267967 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-01 22:36:37.268942 | orchestrator | Sunday 01 June 2025 22:36:37 +0000 (0:00:00.100) 0:00:00.320 *********** 2025-06-01 22:36:38.213155 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:36:38.213417 | orchestrator | 2025-06-01 22:36:38.214547 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-01 22:36:38.214586 | orchestrator | Sunday 01 June 2025 22:36:38 +0000 (0:00:00.945) 0:00:01.266 *********** 2025-06-01 22:36:38.339912 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:36:38.340814 | orchestrator | 2025-06-01 22:36:38.343705 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-01 22:36:38.344123 | orchestrator | 2025-06-01 22:36:38.344721 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-01 22:36:38.345294 | orchestrator | Sunday 01 June 2025 22:36:38 +0000 (0:00:00.126) 0:00:01.393 *********** 2025-06-01 22:36:38.468974 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:36:38.469420 | orchestrator | 2025-06-01 22:36:38.470102 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-01 22:36:38.470701 | orchestrator | Sunday 01 June 2025 22:36:38 +0000 (0:00:00.123) 0:00:01.516 *********** 2025-06-01 22:36:39.085646 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:36:39.085893 | orchestrator | 2025-06-01 22:36:39.086689 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-01 22:36:39.088810 | orchestrator | Sunday 01 June 2025 22:36:39 +0000 (0:00:00.622) 0:00:02.139 *********** 2025-06-01 22:36:39.196088 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:36:39.196955 | orchestrator | 2025-06-01 22:36:39.199214 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-01 22:36:39.200178 | orchestrator | 2025-06-01 22:36:39.202120 | orchestrator | TASK [Exit playbook, if user did not mean to 
reboot systems] ******************* 2025-06-01 22:36:39.202540 | orchestrator | Sunday 01 June 2025 22:36:39 +0000 (0:00:00.110) 0:00:02.250 *********** 2025-06-01 22:36:39.426115 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:36:39.426457 | orchestrator | 2025-06-01 22:36:39.427699 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-01 22:36:39.428918 | orchestrator | Sunday 01 June 2025 22:36:39 +0000 (0:00:00.230) 0:00:02.480 *********** 2025-06-01 22:36:40.069647 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:36:40.071163 | orchestrator | 2025-06-01 22:36:40.072178 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-01 22:36:40.074538 | orchestrator | Sunday 01 June 2025 22:36:40 +0000 (0:00:00.642) 0:00:03.123 *********** 2025-06-01 22:36:40.184492 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:36:40.185295 | orchestrator | 2025-06-01 22:36:40.185688 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-01 22:36:40.186645 | orchestrator | 2025-06-01 22:36:40.188106 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-01 22:36:40.189568 | orchestrator | Sunday 01 June 2025 22:36:40 +0000 (0:00:00.114) 0:00:03.237 *********** 2025-06-01 22:36:40.294612 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:36:40.295525 | orchestrator | 2025-06-01 22:36:40.295899 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-01 22:36:40.296854 | orchestrator | Sunday 01 June 2025 22:36:40 +0000 (0:00:00.111) 0:00:03.349 *********** 2025-06-01 22:36:40.954918 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:36:40.955627 | orchestrator | 2025-06-01 22:36:40.957130 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-01 22:36:40.957847 | orchestrator | Sunday 01 June 2025 22:36:40 +0000 (0:00:00.658) 0:00:04.008 *********** 2025-06-01 22:36:41.077421 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:36:41.078190 | orchestrator | 2025-06-01 22:36:41.081346 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-01 22:36:41.081401 | orchestrator | 2025-06-01 22:36:41.082150 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-01 22:36:41.082910 | orchestrator | Sunday 01 June 2025 22:36:41 +0000 (0:00:00.124) 0:00:04.132 *********** 2025-06-01 22:36:41.186132 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:36:41.187029 | orchestrator | 2025-06-01 22:36:41.188265 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-01 22:36:41.189063 | orchestrator | Sunday 01 June 2025 22:36:41 +0000 (0:00:00.107) 0:00:04.240 *********** 2025-06-01 22:36:41.883555 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:36:41.884433 | orchestrator | 2025-06-01 22:36:41.885345 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-01 22:36:41.886157 | orchestrator | Sunday 01 June 2025 22:36:41 +0000 (0:00:00.696) 0:00:04.936 *********** 2025-06-01 22:36:41.997203 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:36:41.997768 | orchestrator | 2025-06-01 22:36:41.999255 | orchestrator | PLAY [Reboot systems] 
********************************************************** 2025-06-01 22:36:42.000030 | orchestrator | 2025-06-01 22:36:42.001635 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-01 22:36:42.003096 | orchestrator | Sunday 01 June 2025 22:36:41 +0000 (0:00:00.113) 0:00:05.049 *********** 2025-06-01 22:36:42.104856 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:36:42.104947 | orchestrator | 2025-06-01 22:36:42.106062 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-01 22:36:42.106792 | orchestrator | Sunday 01 June 2025 22:36:42 +0000 (0:00:00.109) 0:00:05.159 *********** 2025-06-01 22:36:42.765663 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:36:42.765858 | orchestrator | 2025-06-01 22:36:42.767188 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-01 22:36:42.767975 | orchestrator | Sunday 01 June 2025 22:36:42 +0000 (0:00:00.659) 0:00:05.819 *********** 2025-06-01 22:36:42.804526 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:36:42.805263 | orchestrator | 2025-06-01 22:36:42.805917 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 22:36:42.806421 | orchestrator | 2025-06-01 22:36:42 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-01 22:36:42.806678 | orchestrator | 2025-06-01 22:36:42 | INFO  | Please wait and do not abort execution. 2025-06-01 22:36:42.807573 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 22:36:42.808001 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 22:36:42.808629 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 22:36:42.808980 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 22:36:42.809846 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 22:36:42.811343 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 22:36:42.812467 | orchestrator | 2025-06-01 22:36:42.813691 | orchestrator | 2025-06-01 22:36:42.814726 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 22:36:42.815288 | orchestrator | Sunday 01 June 2025 22:36:42 +0000 (0:00:00.040) 0:00:05.860 *********** 2025-06-01 22:36:42.815839 | orchestrator | =============================================================================== 2025-06-01 22:36:42.816592 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.22s 2025-06-01 22:36:42.817126 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.79s 2025-06-01 22:36:42.818063 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.63s 2025-06-01 22:36:43.396551 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-06-01 22:36:45.061384 | orchestrator | Registering Redlock._acquired_script 2025-06-01 22:36:45.061518 | orchestrator | Registering Redlock._extend_script 2025-06-01 22:36:45.061539 | orchestrator | Registering Redlock._release_script 2025-06-01 
22:36:45.140787 | orchestrator | 2025-06-01 22:36:45 | INFO  | Task df4b38d7-b667-4257-9a32-418abd7d61aa (wait-for-connection) was prepared for execution. 2025-06-01 22:36:45.140905 | orchestrator | 2025-06-01 22:36:45 | INFO  | It takes a moment until task df4b38d7-b667-4257-9a32-418abd7d61aa (wait-for-connection) has been started and output is visible here. 2025-06-01 22:36:49.312625 | orchestrator | 2025-06-01 22:36:49.313754 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-06-01 22:36:49.317566 | orchestrator | 2025-06-01 22:36:49.317597 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-06-01 22:36:49.317610 | orchestrator | Sunday 01 June 2025 22:36:49 +0000 (0:00:00.241) 0:00:00.241 *********** 2025-06-01 22:37:01.032022 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:37:01.032192 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:37:01.032873 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:37:01.033689 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:37:01.035611 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:37:01.036245 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:37:01.037118 | orchestrator | 2025-06-01 22:37:01.038076 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 22:37:01.038767 | orchestrator | 2025-06-01 22:37:01 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-01 22:37:01.038812 | orchestrator | 2025-06-01 22:37:01 | INFO  | Please wait and do not abort execution. 2025-06-01 22:37:01.039783 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 22:37:01.040075 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 22:37:01.040584 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 22:37:01.040910 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 22:37:01.041386 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 22:37:01.041631 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 22:37:01.041999 | orchestrator | 2025-06-01 22:37:01.042248 | orchestrator | 2025-06-01 22:37:01.042570 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 22:37:01.042926 | orchestrator | Sunday 01 June 2025 22:37:01 +0000 (0:00:11.719) 0:00:11.961 *********** 2025-06-01 22:37:01.043439 | orchestrator | =============================================================================== 2025-06-01 22:37:01.044034 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.72s 2025-06-01 22:37:01.677351 | orchestrator | + osism apply hddtemp 2025-06-01 22:37:03.385380 | orchestrator | Registering Redlock._acquired_script 2025-06-01 22:37:03.385474 | orchestrator | Registering Redlock._extend_script 2025-06-01 22:37:03.385489 | orchestrator | Registering Redlock._release_script 2025-06-01 22:37:03.448142 | orchestrator | 2025-06-01 22:37:03 | INFO  | Task 9efaaee0-0ccf-417d-b545-be32f03d9329 (hddtemp) was prepared for execution. 
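
The reboot/wait sequence above is the usual two-phase pattern: trigger each reboot without waiting for it to finish, then poll every node until SSH answers again. Stripped down to plain shell, with host list and timeouts chosen for illustration only:

  # Phase 1: fire the reboots, do not wait for them to complete
  for node in testbed-node-{0..5}; do
      ssh "$node" sudo systemctl reboot || true
  done

  # Phase 2: wait until every remote system is reachable again
  for node in testbed-node-{0..5}; do
      until ssh -o ConnectTimeout=5 "$node" true 2>/dev/null; do
          sleep 5
      done
  done
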
2025-06-01 22:37:03.448196 | orchestrator | 2025-06-01 22:37:03 | INFO  | It takes a moment until task 9efaaee0-0ccf-417d-b545-be32f03d9329 (hddtemp) has been started and output is visible here. 2025-06-01 22:37:07.645546 | orchestrator | 2025-06-01 22:37:07.648311 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-06-01 22:37:07.648615 | orchestrator | 2025-06-01 22:37:07.649440 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-06-01 22:37:07.649792 | orchestrator | Sunday 01 June 2025 22:37:07 +0000 (0:00:00.272) 0:00:00.272 *********** 2025-06-01 22:37:07.808465 | orchestrator | ok: [testbed-manager] 2025-06-01 22:37:07.885016 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:37:07.961914 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:37:08.040040 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:37:08.250876 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:37:08.387157 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:37:08.390374 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:37:08.390406 | orchestrator | 2025-06-01 22:37:08.390419 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-06-01 22:37:08.391467 | orchestrator | Sunday 01 June 2025 22:37:08 +0000 (0:00:00.740) 0:00:01.013 *********** 2025-06-01 22:37:09.710741 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 22:37:09.712682 | orchestrator | 2025-06-01 22:37:09.713692 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-06-01 22:37:09.715203 | orchestrator | Sunday 01 June 2025 22:37:09 +0000 (0:00:01.323) 0:00:02.337 *********** 2025-06-01 22:37:11.616479 | orchestrator | ok: [testbed-manager] 2025-06-01 22:37:11.618290 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:37:11.619104 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:37:11.620239 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:37:11.621312 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:37:11.622813 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:37:11.623606 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:37:11.624133 | orchestrator | 2025-06-01 22:37:11.625057 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-06-01 22:37:11.625850 | orchestrator | Sunday 01 June 2025 22:37:11 +0000 (0:00:01.907) 0:00:04.244 *********** 2025-06-01 22:37:12.254736 | orchestrator | changed: [testbed-manager] 2025-06-01 22:37:12.341892 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:37:12.785528 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:37:12.785742 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:37:12.787015 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:37:12.790088 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:37:12.790130 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:37:12.790142 | orchestrator | 2025-06-01 22:37:12.791390 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-06-01 22:37:12.792025 | orchestrator | Sunday 01 June 2025 22:37:12 +0000 (0:00:01.166) 0:00:05.411 *********** 2025-06-01 22:37:14.006877 | orchestrator | ok: [testbed-node-0] 2025-06-01 
22:37:14.007182 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:37:14.007892 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:37:14.009899 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:37:14.009926 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:37:14.011069 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:37:14.011514 | orchestrator | ok: [testbed-manager] 2025-06-01 22:37:14.012552 | orchestrator | 2025-06-01 22:37:14.013477 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-06-01 22:37:14.014083 | orchestrator | Sunday 01 June 2025 22:37:14 +0000 (0:00:01.223) 0:00:06.635 *********** 2025-06-01 22:37:14.474756 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:37:14.555055 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:37:14.632809 | orchestrator | changed: [testbed-manager] 2025-06-01 22:37:14.716688 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:37:14.858871 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:37:14.859889 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:37:14.861827 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:37:14.863081 | orchestrator | 2025-06-01 22:37:14.863117 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-06-01 22:37:14.864202 | orchestrator | Sunday 01 June 2025 22:37:14 +0000 (0:00:00.850) 0:00:07.485 *********** 2025-06-01 22:37:26.598265 | orchestrator | changed: [testbed-manager] 2025-06-01 22:37:26.599620 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:37:26.599651 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:37:26.601778 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:37:26.603517 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:37:26.604606 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:37:26.606298 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:37:26.606581 | orchestrator | 2025-06-01 22:37:26.607929 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-06-01 22:37:26.608186 | orchestrator | Sunday 01 June 2025 22:37:26 +0000 (0:00:11.738) 0:00:19.224 *********** 2025-06-01 22:37:28.017525 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 22:37:28.018189 | orchestrator | 2025-06-01 22:37:28.019571 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-06-01 22:37:28.020367 | orchestrator | Sunday 01 June 2025 22:37:28 +0000 (0:00:01.417) 0:00:20.641 *********** 2025-06-01 22:37:29.879475 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:37:29.879646 | orchestrator | changed: [testbed-node-0] 2025-06-01 22:37:29.881677 | orchestrator | changed: [testbed-node-1] 2025-06-01 22:37:29.883315 | orchestrator | changed: [testbed-manager] 2025-06-01 22:37:29.884539 | orchestrator | changed: [testbed-node-2] 2025-06-01 22:37:29.885548 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:37:29.886757 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:37:29.887372 | orchestrator | 2025-06-01 22:37:29.888469 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 22:37:29.888925 | orchestrator | 2025-06-01 22:37:29 | INFO  | Play has been completed. 
There may now be a delay until all logs have been written. 2025-06-01 22:37:29.889558 | orchestrator | 2025-06-01 22:37:29 | INFO  | Please wait and do not abort execution. 2025-06-01 22:37:29.890733 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 22:37:29.891280 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-01 22:37:29.892400 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-01 22:37:29.893492 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-01 22:37:29.894723 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-01 22:37:29.895871 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-01 22:37:29.896103 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-01 22:37:29.896973 | orchestrator | 2025-06-01 22:37:29.897449 | orchestrator | 2025-06-01 22:37:29.898112 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 22:37:29.898761 | orchestrator | Sunday 01 June 2025 22:37:29 +0000 (0:00:01.865) 0:00:22.507 *********** 2025-06-01 22:37:29.899250 | orchestrator | =============================================================================== 2025-06-01 22:37:29.899619 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 11.74s 2025-06-01 22:37:29.900014 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.91s 2025-06-01 22:37:29.900716 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.87s 2025-06-01 22:37:29.900839 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.42s 2025-06-01 22:37:29.901217 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.32s 2025-06-01 22:37:29.901957 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.22s 2025-06-01 22:37:29.902255 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.17s 2025-06-01 22:37:29.904057 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.85s 2025-06-01 22:37:29.904734 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.74s 2025-06-01 22:37:30.527400 | orchestrator | + sudo systemctl restart docker-compose@manager 2025-06-01 22:37:31.960265 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-01 22:37:31.960399 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-06-01 22:37:31.960414 | orchestrator | + local max_attempts=60 2025-06-01 22:37:31.960426 | orchestrator | + local name=ceph-ansible 2025-06-01 22:37:31.960438 | orchestrator | + local attempt_num=1 2025-06-01 22:37:31.960881 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-01 22:37:32.000365 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-01 22:37:32.000435 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-06-01 22:37:32.000449 | orchestrator | + local max_attempts=60 2025-06-01 22:37:32.000460 | orchestrator | + local 
name=kolla-ansible 2025-06-01 22:37:32.000471 | orchestrator | + local attempt_num=1 2025-06-01 22:37:32.000482 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-06-01 22:37:32.032075 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-01 22:37:32.032138 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-06-01 22:37:32.032152 | orchestrator | + local max_attempts=60 2025-06-01 22:37:32.032164 | orchestrator | + local name=osism-ansible 2025-06-01 22:37:32.032175 | orchestrator | + local attempt_num=1 2025-06-01 22:37:32.032408 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-06-01 22:37:32.061767 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-01 22:37:32.061836 | orchestrator | + [[ true == \t\r\u\e ]] 2025-06-01 22:37:32.061849 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-06-01 22:37:32.234275 | orchestrator | ARA in ceph-ansible already disabled. 2025-06-01 22:37:32.409423 | orchestrator | ARA in kolla-ansible already disabled. 2025-06-01 22:37:32.599441 | orchestrator | ARA in osism-ansible already disabled. 2025-06-01 22:37:32.795658 | orchestrator | ARA in osism-kubernetes already disabled. 2025-06-01 22:37:32.796522 | orchestrator | + osism apply gather-facts 2025-06-01 22:37:34.516539 | orchestrator | Registering Redlock._acquired_script 2025-06-01 22:37:34.516660 | orchestrator | Registering Redlock._extend_script 2025-06-01 22:37:34.516674 | orchestrator | Registering Redlock._release_script 2025-06-01 22:37:34.579953 | orchestrator | 2025-06-01 22:37:34 | INFO  | Task 2538cfa4-f56f-4749-b6ae-ee4fe61e941d (gather-facts) was prepared for execution. 2025-06-01 22:37:34.580069 | orchestrator | 2025-06-01 22:37:34 | INFO  | It takes a moment until task 2538cfa4-f56f-4749-b6ae-ee4fe61e941d (gather-facts) has been started and output is visible here. 
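The `set -x` trace above polls container health with `docker inspect` before continuing. A minimal reconstruction of the helper, assuming a fixed sleep between attempts (only the variable names and the `docker inspect -f '{{.State.Health.Status}}'` call are visible in the trace, because every container was already healthy on the first check):

    wait_for_container_healthy() {
        local max_attempts="$1"
        local name="$2"
        local attempt_num=1
        # Poll the Docker health status until the container reports "healthy".
        until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" == "healthy" ]]; do
            if (( attempt_num >= max_attempts )); then
                echo "Container $name did not become healthy in time" >&2
                return 1
            fi
            attempt_num=$(( attempt_num + 1 ))
            sleep 5  # assumed poll interval; not visible in the trace
        done
    }

    wait_for_container_healthy 60 ceph-ansible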
2025-06-01 22:37:38.771307 | orchestrator | 2025-06-01 22:37:38.772248 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-01 22:37:38.775197 | orchestrator | 2025-06-01 22:37:38.775241 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-01 22:37:38.775254 | orchestrator | Sunday 01 June 2025 22:37:38 +0000 (0:00:00.251) 0:00:00.251 *********** 2025-06-01 22:37:43.913056 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:37:43.914428 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:37:43.915025 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:37:43.917426 | orchestrator | ok: [testbed-manager] 2025-06-01 22:37:43.917465 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:37:43.917850 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:37:43.918536 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:37:43.919333 | orchestrator | 2025-06-01 22:37:43.919939 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-01 22:37:43.920928 | orchestrator | 2025-06-01 22:37:43.921609 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-01 22:37:43.921902 | orchestrator | Sunday 01 June 2025 22:37:43 +0000 (0:00:05.147) 0:00:05.398 *********** 2025-06-01 22:37:44.070128 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:37:44.149804 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:37:44.230098 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:37:44.307288 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:37:44.385948 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:37:44.432399 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:37:44.433540 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:37:44.434006 | orchestrator | 2025-06-01 22:37:44.435497 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 22:37:44.436838 | orchestrator | 2025-06-01 22:37:44 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-01 22:37:44.437266 | orchestrator | 2025-06-01 22:37:44 | INFO  | Please wait and do not abort execution. 
2025-06-01 22:37:44.437918 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-01 22:37:44.438663 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-01 22:37:44.439332 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-01 22:37:44.439782 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-01 22:37:44.440360 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-01 22:37:44.440915 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-01 22:37:44.441298 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-01 22:37:44.441660 | orchestrator | 2025-06-01 22:37:44.442365 | orchestrator | 2025-06-01 22:37:44.442902 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 22:37:44.443700 | orchestrator | Sunday 01 June 2025 22:37:44 +0000 (0:00:00.518) 0:00:05.917 *********** 2025-06-01 22:37:44.444266 | orchestrator | =============================================================================== 2025-06-01 22:37:44.444523 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.15s 2025-06-01 22:37:44.445529 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s 2025-06-01 22:37:45.124997 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-06-01 22:37:45.139118 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-06-01 22:37:45.157785 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-06-01 22:37:45.177518 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-06-01 22:37:45.197558 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-06-01 22:37:45.215987 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-06-01 22:37:45.232770 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-06-01 22:37:45.253503 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-06-01 22:37:45.272094 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-06-01 22:37:45.291703 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-06-01 22:37:45.312285 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-06-01 22:37:45.334116 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-06-01 22:37:45.352010 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh 
/usr/local/bin/upgrade-infrastructure 2025-06-01 22:37:45.371715 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-06-01 22:37:45.389307 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-06-01 22:37:45.407548 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-06-01 22:37:45.425680 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-06-01 22:37:45.438905 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-06-01 22:37:45.450512 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-06-01 22:37:45.464121 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-06-01 22:37:45.477331 | orchestrator | + [[ false == \t\r\u\e ]] 2025-06-01 22:37:45.619388 | orchestrator | ok: Runtime: 0:18:50.022204 2025-06-01 22:37:45.739382 | 2025-06-01 22:37:45.739538 | TASK [Deploy services] 2025-06-01 22:37:46.277769 | orchestrator | skipping: Conditional result was False 2025-06-01 22:37:46.297613 | 2025-06-01 22:37:46.297802 | TASK [Deploy in a nutshell] 2025-06-01 22:37:47.014290 | orchestrator | + set -e 2025-06-01 22:37:47.014499 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-01 22:37:47.014524 | orchestrator | ++ export INTERACTIVE=false 2025-06-01 22:37:47.014545 | orchestrator | ++ INTERACTIVE=false 2025-06-01 22:37:47.014558 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-01 22:37:47.014571 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-01 22:37:47.014601 | orchestrator | + source /opt/manager-vars.sh 2025-06-01 22:37:47.014654 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-01 22:37:47.014683 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-01 22:37:47.014697 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-01 22:37:47.014712 | orchestrator | ++ CEPH_VERSION=reef 2025-06-01 22:37:47.014724 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-01 22:37:47.014743 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-01 22:37:47.014754 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-01 22:37:47.014775 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-01 22:37:47.014785 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-01 22:37:47.014806 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-01 22:37:47.014817 | orchestrator | ++ export ARA=false 2025-06-01 22:37:47.014828 | orchestrator | ++ ARA=false 2025-06-01 22:37:47.014840 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-01 22:37:47.014935 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-01 22:37:47.014950 | orchestrator | ++ export TEMPEST=false 2025-06-01 22:37:47.014961 | orchestrator | ++ TEMPEST=false 2025-06-01 22:37:47.014971 | orchestrator | ++ export IS_ZUUL=true 2025-06-01 22:37:47.014982 | orchestrator | ++ IS_ZUUL=true 2025-06-01 22:37:47.014993 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.143 2025-06-01 22:37:47.015005 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.143 2025-06-01 22:37:47.015020 | orchestrator | ++ export EXTERNAL_API=false 2025-06-01 22:37:47.015031 | orchestrator | ++ EXTERNAL_API=false 2025-06-01 
22:37:47.015041 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-01 22:37:47.015052 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-01 22:37:47.015171 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-01 22:37:47.015188 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-01 22:37:47.015221 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-01 22:37:47.015242 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-01 22:37:47.015257 | orchestrator | 2025-06-01 22:37:47.015269 | orchestrator | # PULL IMAGES 2025-06-01 22:37:47.015281 | orchestrator | 2025-06-01 22:37:47.015292 | orchestrator | + echo 2025-06-01 22:37:47.015303 | orchestrator | + echo '# PULL IMAGES' 2025-06-01 22:37:47.015314 | orchestrator | + echo 2025-06-01 22:37:47.017126 | orchestrator | ++ semver latest 7.0.0 2025-06-01 22:37:47.087504 | orchestrator | + [[ -1 -ge 0 ]] 2025-06-01 22:37:47.087582 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-06-01 22:37:47.087596 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-06-01 22:37:48.835396 | orchestrator | 2025-06-01 22:37:48 | INFO  | Trying to run play pull-images in environment custom 2025-06-01 22:37:48.838952 | orchestrator | Registering Redlock._acquired_script 2025-06-01 22:37:48.838990 | orchestrator | Registering Redlock._extend_script 2025-06-01 22:37:48.839002 | orchestrator | Registering Redlock._release_script 2025-06-01 22:37:48.907366 | orchestrator | 2025-06-01 22:37:48 | INFO  | Task 32a745c9-490a-407a-a97a-32086c6edcd6 (pull-images) was prepared for execution. 2025-06-01 22:37:48.907441 | orchestrator | 2025-06-01 22:37:48 | INFO  | It takes a moment until task 32a745c9-490a-407a-a97a-32086c6edcd6 (pull-images) has been started and output is visible here. 2025-06-01 22:37:52.986695 | orchestrator | 2025-06-01 22:37:52.987748 | orchestrator | PLAY [Pull images] ************************************************************* 2025-06-01 22:37:52.988058 | orchestrator | 2025-06-01 22:37:52.988923 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-06-01 22:37:52.989725 | orchestrator | Sunday 01 June 2025 22:37:52 +0000 (0:00:00.181) 0:00:00.181 *********** 2025-06-01 22:39:00.187844 | orchestrator | changed: [testbed-manager] 2025-06-01 22:39:00.188025 | orchestrator | 2025-06-01 22:39:00.188062 | orchestrator | TASK [Pull other images] ******************************************************* 2025-06-01 22:39:00.188397 | orchestrator | Sunday 01 June 2025 22:39:00 +0000 (0:01:07.202) 0:01:07.384 *********** 2025-06-01 22:39:58.636324 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-06-01 22:39:58.636916 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-06-01 22:39:58.637994 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-06-01 22:39:58.640176 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-06-01 22:39:58.640372 | orchestrator | changed: [testbed-manager] => (item=common) 2025-06-01 22:39:58.641555 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-06-01 22:39:58.642573 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-06-01 22:39:58.643443 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-06-01 22:39:58.644252 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-06-01 22:39:58.644530 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-06-01 22:39:58.645600 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-06-01 
22:39:58.646124 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-06-01 22:39:58.646906 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-06-01 22:39:58.647702 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-06-01 22:39:58.648074 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-06-01 22:39:58.648643 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-06-01 22:39:58.649105 | orchestrator | changed: [testbed-manager] => (item=octavia) 2025-06-01 22:39:58.649644 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-06-01 22:39:58.650151 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-06-01 22:39:58.651426 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-06-01 22:39:58.652494 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-06-01 22:39:58.653477 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-06-01 22:39:58.654296 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-06-01 22:39:58.655565 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-06-01 22:39:58.656180 | orchestrator | 2025-06-01 22:39:58.656624 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 22:39:58.657155 | orchestrator | 2025-06-01 22:39:58 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-01 22:39:58.657334 | orchestrator | 2025-06-01 22:39:58 | INFO  | Please wait and do not abort execution. 2025-06-01 22:39:58.658451 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 22:39:58.659103 | orchestrator | 2025-06-01 22:39:58.659681 | orchestrator | 2025-06-01 22:39:58.660787 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 22:39:58.661230 | orchestrator | Sunday 01 June 2025 22:39:58 +0000 (0:00:58.449) 0:02:05.833 *********** 2025-06-01 22:39:58.661855 | orchestrator | =============================================================================== 2025-06-01 22:39:58.662657 | orchestrator | Pull keystone image ---------------------------------------------------- 67.20s 2025-06-01 22:39:58.663764 | orchestrator | Pull other images ------------------------------------------------------ 58.45s 2025-06-01 22:40:00.928776 | orchestrator | 2025-06-01 22:40:00 | INFO  | Trying to run play wipe-partitions in environment custom 2025-06-01 22:40:00.933099 | orchestrator | Registering Redlock._acquired_script 2025-06-01 22:40:00.933135 | orchestrator | Registering Redlock._extend_script 2025-06-01 22:40:00.933147 | orchestrator | Registering Redlock._release_script 2025-06-01 22:40:00.992351 | orchestrator | 2025-06-01 22:40:00 | INFO  | Task fc0f2ce1-1c05-4d81-947c-d6ef1bc579e9 (wipe-partitions) was prepared for execution. 2025-06-01 22:40:00.992398 | orchestrator | 2025-06-01 22:40:00 | INFO  | It takes a moment until task fc0f2ce1-1c05-4d81-947c-d6ef1bc579e9 (wipe-partitions) has been started and output is visible here. 
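The wipe-partitions play whose output follows clears the Ceph OSD candidate disks on testbed-node-3/4/5 so that ceph-ansible later finds them empty; UID 167 in its first task is the ceph user of the Ceph container images. Per device, the reported tasks amount to roughly this shell equivalent (a sketch based only on the task names and the 32M figure in the log; the exact flags are assumptions):

    for dev in /dev/sdb /dev/sdc /dev/sdd; do
        sudo wipefs --all "$dev"                                    # "Wipe partitions with wipefs"
        sudo dd if=/dev/zero of="$dev" bs=1M count=32 oflag=direct  # "Overwrite first 32M with zeros"
    done
    sudo udevadm control --reload-rules                             # "Reload udev rules"
    sudo udevadm trigger                                            # "Request device events from the kernel"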
2025-06-01 22:40:05.156783 | orchestrator | 2025-06-01 22:40:05.160789 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-06-01 22:40:05.162687 | orchestrator | 2025-06-01 22:40:05.163577 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-06-01 22:40:05.163600 | orchestrator | Sunday 01 June 2025 22:40:05 +0000 (0:00:00.152) 0:00:00.152 *********** 2025-06-01 22:40:05.865786 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:40:05.865883 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:40:05.866468 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:40:05.866700 | orchestrator | 2025-06-01 22:40:05.867143 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-06-01 22:40:05.868319 | orchestrator | Sunday 01 June 2025 22:40:05 +0000 (0:00:00.709) 0:00:00.861 *********** 2025-06-01 22:40:06.053717 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:40:06.163220 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:40:06.164089 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:40:06.164108 | orchestrator | 2025-06-01 22:40:06.164117 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-06-01 22:40:06.164196 | orchestrator | Sunday 01 June 2025 22:40:06 +0000 (0:00:00.299) 0:00:01.160 *********** 2025-06-01 22:40:06.976647 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:40:06.976939 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:40:06.976951 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:40:06.977124 | orchestrator | 2025-06-01 22:40:06.977566 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-06-01 22:40:06.980319 | orchestrator | Sunday 01 June 2025 22:40:06 +0000 (0:00:00.810) 0:00:01.970 *********** 2025-06-01 22:40:07.143915 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:40:07.251674 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:40:07.251825 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:40:07.251834 | orchestrator | 2025-06-01 22:40:07.252059 | orchestrator | TASK [Check device availability] *********************************************** 2025-06-01 22:40:07.252439 | orchestrator | Sunday 01 June 2025 22:40:07 +0000 (0:00:00.278) 0:00:02.249 *********** 2025-06-01 22:40:08.422925 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-06-01 22:40:08.423026 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-06-01 22:40:08.423147 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-06-01 22:40:08.424838 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-06-01 22:40:08.425477 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-06-01 22:40:08.426486 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-06-01 22:40:08.427504 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-06-01 22:40:08.428271 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-06-01 22:40:08.430084 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-06-01 22:40:08.430094 | orchestrator | 2025-06-01 22:40:08.430997 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-06-01 22:40:08.431666 | orchestrator | Sunday 01 June 2025 22:40:08 +0000 (0:00:01.171) 0:00:03.420 *********** 2025-06-01 22:40:09.797732 | 
orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-06-01 22:40:09.800174 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-06-01 22:40:09.800655 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-06-01 22:40:09.804336 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-06-01 22:40:09.804577 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-06-01 22:40:09.805191 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-06-01 22:40:09.805482 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-06-01 22:40:09.805892 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-06-01 22:40:09.810326 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-06-01 22:40:09.811534 | orchestrator | 2025-06-01 22:40:09.811548 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-06-01 22:40:09.812761 | orchestrator | Sunday 01 June 2025 22:40:09 +0000 (0:00:01.372) 0:00:04.792 *********** 2025-06-01 22:40:12.129506 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-06-01 22:40:12.289573 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-06-01 22:40:12.289638 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-06-01 22:40:12.289644 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-06-01 22:40:12.289672 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-06-01 22:40:12.289677 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-06-01 22:40:12.289682 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-06-01 22:40:12.289687 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-06-01 22:40:12.289691 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-06-01 22:40:12.289696 | orchestrator | 2025-06-01 22:40:12.289702 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-06-01 22:40:12.289708 | orchestrator | Sunday 01 June 2025 22:40:12 +0000 (0:00:02.336) 0:00:07.128 *********** 2025-06-01 22:40:12.746585 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:40:12.751354 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:40:12.752695 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:40:12.753144 | orchestrator | 2025-06-01 22:40:12.754876 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-06-01 22:40:12.756868 | orchestrator | Sunday 01 June 2025 22:40:12 +0000 (0:00:00.613) 0:00:07.742 *********** 2025-06-01 22:40:13.383394 | orchestrator | changed: [testbed-node-3] 2025-06-01 22:40:13.384001 | orchestrator | changed: [testbed-node-4] 2025-06-01 22:40:13.385211 | orchestrator | changed: [testbed-node-5] 2025-06-01 22:40:13.385909 | orchestrator | 2025-06-01 22:40:13.386697 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 22:40:13.387397 | orchestrator | 2025-06-01 22:40:13 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-01 22:40:13.387412 | orchestrator | 2025-06-01 22:40:13 | INFO  | Please wait and do not abort execution. 
2025-06-01 22:40:13.388050 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 22:40:13.388731 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 22:40:13.389439 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 22:40:13.389664 | orchestrator | 2025-06-01 22:40:13.390738 | orchestrator | 2025-06-01 22:40:13.392149 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 22:40:13.393000 | orchestrator | Sunday 01 June 2025 22:40:13 +0000 (0:00:00.639) 0:00:08.381 *********** 2025-06-01 22:40:13.394331 | orchestrator | =============================================================================== 2025-06-01 22:40:13.395413 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.34s 2025-06-01 22:40:13.396119 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.37s 2025-06-01 22:40:13.397601 | orchestrator | Check device availability ----------------------------------------------- 1.17s 2025-06-01 22:40:13.398488 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.81s 2025-06-01 22:40:13.398845 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.71s 2025-06-01 22:40:13.399265 | orchestrator | Request device events from the kernel ----------------------------------- 0.64s 2025-06-01 22:40:13.400218 | orchestrator | Reload udev rules ------------------------------------------------------- 0.61s 2025-06-01 22:40:13.401131 | orchestrator | Remove all rook related logical devices --------------------------------- 0.30s 2025-06-01 22:40:13.401924 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.28s 2025-06-01 22:40:15.909001 | orchestrator | Registering Redlock._acquired_script 2025-06-01 22:40:15.909164 | orchestrator | Registering Redlock._extend_script 2025-06-01 22:40:15.909183 | orchestrator | Registering Redlock._release_script 2025-06-01 22:40:15.973912 | orchestrator | 2025-06-01 22:40:15 | INFO  | Task 947ca58f-b768-4e29-aa75-4f0f9b11c535 (facts) was prepared for execution. 2025-06-01 22:40:15.974109 | orchestrator | 2025-06-01 22:40:15 | INFO  | It takes a moment until task 947ca58f-b768-4e29-aa75-4f0f9b11c535 (facts) has been started and output is visible here. 
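The facts play that follows begins by creating a custom facts directory. Ansible's local facts conventionally live in /etc/ansible/facts.d; any *.fact JSON (or executable) file dropped there is picked up on the next fact-gathering run. A hand-rolled illustration (the testbed.fact name is made up here, not taken from the play):

    sudo mkdir -p /etc/ansible/facts.d
    echo '{"deployed_by": "testbed"}' | sudo tee /etc/ansible/facts.d/testbed.fact
    # Subsequent setup/gather-facts runs expose this as ansible_local.testbed.deployed_by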
2025-06-01 22:40:20.428800 | orchestrator | 2025-06-01 22:40:20.429700 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-06-01 22:40:20.431478 | orchestrator | 2025-06-01 22:40:20.435678 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-01 22:40:20.436166 | orchestrator | Sunday 01 June 2025 22:40:20 +0000 (0:00:00.312) 0:00:00.312 *********** 2025-06-01 22:40:21.161143 | orchestrator | ok: [testbed-manager] 2025-06-01 22:40:21.640596 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:40:21.641987 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:40:21.643212 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:40:21.643640 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:40:21.644164 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:40:21.645007 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:40:21.645313 | orchestrator | 2025-06-01 22:40:21.645689 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-01 22:40:21.646114 | orchestrator | Sunday 01 June 2025 22:40:21 +0000 (0:00:01.210) 0:00:01.523 *********** 2025-06-01 22:40:21.821430 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:40:21.904941 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:40:21.989817 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:40:22.071965 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:40:22.153053 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:40:22.946471 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:40:22.947432 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:40:22.948323 | orchestrator | 2025-06-01 22:40:22.949920 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-01 22:40:22.950465 | orchestrator | 2025-06-01 22:40:22.951734 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-01 22:40:22.952559 | orchestrator | Sunday 01 June 2025 22:40:22 +0000 (0:00:01.306) 0:00:02.829 *********** 2025-06-01 22:40:27.664226 | orchestrator | ok: [testbed-node-1] 2025-06-01 22:40:27.664482 | orchestrator | ok: [testbed-node-0] 2025-06-01 22:40:27.665530 | orchestrator | ok: [testbed-manager] 2025-06-01 22:40:27.666161 | orchestrator | ok: [testbed-node-2] 2025-06-01 22:40:27.669840 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:40:27.670687 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:40:27.671322 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:40:27.671950 | orchestrator | 2025-06-01 22:40:27.672874 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-01 22:40:27.674204 | orchestrator | 2025-06-01 22:40:27.674600 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-01 22:40:27.677881 | orchestrator | Sunday 01 June 2025 22:40:27 +0000 (0:00:04.718) 0:00:07.548 *********** 2025-06-01 22:40:28.054810 | orchestrator | skipping: [testbed-manager] 2025-06-01 22:40:28.132173 | orchestrator | skipping: [testbed-node-0] 2025-06-01 22:40:28.214367 | orchestrator | skipping: [testbed-node-1] 2025-06-01 22:40:28.292713 | orchestrator | skipping: [testbed-node-2] 2025-06-01 22:40:28.372684 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:40:28.410323 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:40:28.411442 | orchestrator | skipping: 
[testbed-node-5] 2025-06-01 22:40:28.412834 | orchestrator | 2025-06-01 22:40:28.414978 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 22:40:28.415042 | orchestrator | 2025-06-01 22:40:28 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-01 22:40:28.415058 | orchestrator | 2025-06-01 22:40:28 | INFO  | Please wait and do not abort execution. 2025-06-01 22:40:28.416027 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 22:40:28.416844 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 22:40:28.418526 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 22:40:28.419954 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 22:40:28.420919 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 22:40:28.421945 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 22:40:28.423176 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 22:40:28.423634 | orchestrator | 2025-06-01 22:40:28.424080 | orchestrator | 2025-06-01 22:40:28.426713 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 22:40:28.428562 | orchestrator | Sunday 01 June 2025 22:40:28 +0000 (0:00:00.746) 0:00:08.295 *********** 2025-06-01 22:40:28.429328 | orchestrator | =============================================================================== 2025-06-01 22:40:28.430942 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.72s 2025-06-01 22:40:28.431766 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.31s 2025-06-01 22:40:28.432401 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.21s 2025-06-01 22:40:28.433342 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.75s 2025-06-01 22:40:30.969722 | orchestrator | 2025-06-01 22:40:30 | INFO  | Task f111cd85-c9cd-4f17-9f48-c2233465ffa1 (ceph-configure-lvm-volumes) was prepared for execution. 2025-06-01 22:40:30.969831 | orchestrator | 2025-06-01 22:40:30 | INFO  | It takes a moment until task f111cd85-c9cd-4f17-9f48-c2233465ffa1 (ceph-configure-lvm-volumes) has been started and output is visible here. 
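The ceph-configure-lvm-volumes play whose output follows assigns a stable UUID to each OSD disk and derives from it the VG/LV names that end up in lvm_volumes for ceph-ansible. On disk, the generated configuration corresponds to roughly these manual LVM commands (a sketch; the UUID is the one the play prints for sdb further below):

    uuid=008ba5ef-cc9a-56f9-b375-6638a5870e2c        # osd_lvm_uuid generated for sdb
    sudo pvcreate /dev/sdb
    sudo vgcreate "ceph-${uuid}" /dev/sdb            # data_vg: ceph-<uuid>
    sudo lvcreate -l 100%FREE -n "osd-block-${uuid}" "ceph-${uuid}"   # data: osd-block-<uuid>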
2025-06-01 22:40:35.975116 | orchestrator | 2025-06-01 22:40:35.975278 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-06-01 22:40:35.975479 | orchestrator | 2025-06-01 22:40:35.977272 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-01 22:40:35.978650 | orchestrator | Sunday 01 June 2025 22:40:35 +0000 (0:00:00.375) 0:00:00.375 *********** 2025-06-01 22:40:36.227204 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-01 22:40:36.228654 | orchestrator | 2025-06-01 22:40:36.229568 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-01 22:40:36.231503 | orchestrator | Sunday 01 June 2025 22:40:36 +0000 (0:00:00.252) 0:00:00.627 *********** 2025-06-01 22:40:36.598757 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:40:36.599417 | orchestrator | 2025-06-01 22:40:36.600506 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:40:36.602470 | orchestrator | Sunday 01 June 2025 22:40:36 +0000 (0:00:00.372) 0:00:01.000 *********** 2025-06-01 22:40:37.033308 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-06-01 22:40:37.034263 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-06-01 22:40:37.035558 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-06-01 22:40:37.037414 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-06-01 22:40:37.038233 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-06-01 22:40:37.041084 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-06-01 22:40:37.042919 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-06-01 22:40:37.044019 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-06-01 22:40:37.044702 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-06-01 22:40:37.045348 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-06-01 22:40:37.045843 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-06-01 22:40:37.046377 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-06-01 22:40:37.049265 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-06-01 22:40:37.049866 | orchestrator | 2025-06-01 22:40:37.051201 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:40:37.051401 | orchestrator | Sunday 01 June 2025 22:40:37 +0000 (0:00:00.434) 0:00:01.434 *********** 2025-06-01 22:40:37.607312 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:40:37.609174 | orchestrator | 2025-06-01 22:40:37.609607 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:40:37.610416 | orchestrator | Sunday 01 June 2025 22:40:37 +0000 (0:00:00.574) 0:00:02.009 *********** 2025-06-01 22:40:37.828181 | orchestrator | skipping: [testbed-node-3] 2025-06-01 
22:40:37.829223 | orchestrator | 2025-06-01 22:40:37.830265 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:40:37.830821 | orchestrator | Sunday 01 June 2025 22:40:37 +0000 (0:00:00.222) 0:00:02.231 *********** 2025-06-01 22:40:38.058303 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:40:38.058531 | orchestrator | 2025-06-01 22:40:38.060453 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:40:38.060639 | orchestrator | Sunday 01 June 2025 22:40:38 +0000 (0:00:00.227) 0:00:02.458 *********** 2025-06-01 22:40:38.263607 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:40:38.263720 | orchestrator | 2025-06-01 22:40:38.264818 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:40:38.265040 | orchestrator | Sunday 01 June 2025 22:40:38 +0000 (0:00:00.204) 0:00:02.663 *********** 2025-06-01 22:40:38.451332 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:40:38.451692 | orchestrator | 2025-06-01 22:40:38.454119 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:40:38.454548 | orchestrator | Sunday 01 June 2025 22:40:38 +0000 (0:00:00.188) 0:00:02.851 *********** 2025-06-01 22:40:38.703837 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:40:38.707318 | orchestrator | 2025-06-01 22:40:38.719479 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:40:38.724179 | orchestrator | Sunday 01 June 2025 22:40:38 +0000 (0:00:00.253) 0:00:03.105 *********** 2025-06-01 22:40:38.939639 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:40:38.939757 | orchestrator | 2025-06-01 22:40:38.941202 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:40:38.941567 | orchestrator | Sunday 01 June 2025 22:40:38 +0000 (0:00:00.235) 0:00:03.341 *********** 2025-06-01 22:40:39.163769 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:40:39.165355 | orchestrator | 2025-06-01 22:40:39.167666 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:40:39.168331 | orchestrator | Sunday 01 June 2025 22:40:39 +0000 (0:00:00.224) 0:00:03.566 *********** 2025-06-01 22:40:39.692647 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9eb8197d-3dc8-4459-9c52-34779715aaef) 2025-06-01 22:40:39.692818 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9eb8197d-3dc8-4459-9c52-34779715aaef) 2025-06-01 22:40:39.692839 | orchestrator | 2025-06-01 22:40:39.692852 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:40:39.692933 | orchestrator | Sunday 01 June 2025 22:40:39 +0000 (0:00:00.524) 0:00:04.090 *********** 2025-06-01 22:40:40.319816 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e23ad96a-b832-416d-911f-1711f12500c4) 2025-06-01 22:40:40.319920 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e23ad96a-b832-416d-911f-1711f12500c4) 2025-06-01 22:40:40.319934 | orchestrator | 2025-06-01 22:40:40.319947 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:40:40.320550 | orchestrator | Sunday 01 June 2025 22:40:40 +0000 (0:00:00.615) 0:00:04.706 *********** 2025-06-01 
22:40:41.024698 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_768ce349-132d-4c04-96b3-035bfe10ebf6) 2025-06-01 22:40:41.025093 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_768ce349-132d-4c04-96b3-035bfe10ebf6) 2025-06-01 22:40:41.029281 | orchestrator | 2025-06-01 22:40:41.029942 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:40:41.030219 | orchestrator | Sunday 01 June 2025 22:40:41 +0000 (0:00:00.721) 0:00:05.428 *********** 2025-06-01 22:40:41.700339 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f2cefa5c-3d1d-4277-b121-6d9adea683a7) 2025-06-01 22:40:41.701171 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f2cefa5c-3d1d-4277-b121-6d9adea683a7) 2025-06-01 22:40:41.701442 | orchestrator | 2025-06-01 22:40:41.703156 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:40:41.703513 | orchestrator | Sunday 01 June 2025 22:40:41 +0000 (0:00:00.676) 0:00:06.104 *********** 2025-06-01 22:40:42.507413 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-01 22:40:42.508236 | orchestrator | 2025-06-01 22:40:42.511771 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:40:42.511799 | orchestrator | Sunday 01 June 2025 22:40:42 +0000 (0:00:00.803) 0:00:06.908 *********** 2025-06-01 22:40:42.901329 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-06-01 22:40:42.902104 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-06-01 22:40:42.904068 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-06-01 22:40:42.904228 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-06-01 22:40:42.904724 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-06-01 22:40:42.905418 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-06-01 22:40:42.906305 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-06-01 22:40:42.907046 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-06-01 22:40:42.909876 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-06-01 22:40:42.910378 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-06-01 22:40:42.913795 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-06-01 22:40:42.913878 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-06-01 22:40:42.914241 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-06-01 22:40:42.914499 | orchestrator | 2025-06-01 22:40:42.914778 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:40:42.915106 | orchestrator | Sunday 01 June 2025 22:40:42 +0000 (0:00:00.396) 0:00:07.304 *********** 2025-06-01 22:40:43.104969 | orchestrator | skipping: [testbed-node-3] 
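The "Add known links" tasks above map kernel device names (sda..sdd, sr0) to their stable /dev/disk/by-id aliases, which is where the scsi-0QEMU_QEMU_HARDDISK_* items come from. The same mapping can be inspected by hand (illustrative commands, not the play's implementation):

    lsblk --nodeps -o NAME,SERIAL,TYPE   # whole disks with their serial numbers
    ls -l /dev/disk/by-id/               # symlinks such as scsi-0QEMU_QEMU_HARDDISK_<serial> -> ../../sdb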
2025-06-01 22:40:43.106326 | orchestrator | 2025-06-01 22:40:43.106421 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:40:43.107802 | orchestrator | Sunday 01 June 2025 22:40:43 +0000 (0:00:00.204) 0:00:07.509 *********** 2025-06-01 22:40:43.323716 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:40:43.326112 | orchestrator | 2025-06-01 22:40:43.326164 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:40:43.326179 | orchestrator | Sunday 01 June 2025 22:40:43 +0000 (0:00:00.217) 0:00:07.726 *********** 2025-06-01 22:40:43.523775 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:40:43.523867 | orchestrator | 2025-06-01 22:40:43.523964 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:40:43.524204 | orchestrator | Sunday 01 June 2025 22:40:43 +0000 (0:00:00.196) 0:00:07.923 *********** 2025-06-01 22:40:43.715058 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:40:43.715155 | orchestrator | 2025-06-01 22:40:43.715261 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:40:43.715278 | orchestrator | Sunday 01 June 2025 22:40:43 +0000 (0:00:00.196) 0:00:08.119 *********** 2025-06-01 22:40:43.913126 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:40:43.914105 | orchestrator | 2025-06-01 22:40:43.914219 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:40:43.914528 | orchestrator | Sunday 01 June 2025 22:40:43 +0000 (0:00:00.198) 0:00:08.317 *********** 2025-06-01 22:40:44.108580 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:40:44.109907 | orchestrator | 2025-06-01 22:40:44.114346 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:40:44.115115 | orchestrator | Sunday 01 June 2025 22:40:44 +0000 (0:00:00.192) 0:00:08.510 *********** 2025-06-01 22:40:44.309588 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:40:44.309688 | orchestrator | 2025-06-01 22:40:44.310873 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:40:44.311923 | orchestrator | Sunday 01 June 2025 22:40:44 +0000 (0:00:00.201) 0:00:08.712 *********** 2025-06-01 22:40:44.495494 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:40:44.496161 | orchestrator | 2025-06-01 22:40:44.497389 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:40:44.500459 | orchestrator | Sunday 01 June 2025 22:40:44 +0000 (0:00:00.186) 0:00:08.898 *********** 2025-06-01 22:40:45.614163 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-06-01 22:40:45.614331 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-06-01 22:40:45.617142 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-06-01 22:40:45.617606 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-06-01 22:40:45.619954 | orchestrator | 2025-06-01 22:40:45.619978 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:40:45.620015 | orchestrator | Sunday 01 June 2025 22:40:45 +0000 (0:00:01.118) 0:00:10.017 *********** 2025-06-01 22:40:45.835484 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:40:45.835718 | orchestrator | 2025-06-01 22:40:45.835742 | orchestrator | 
TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:40:45.835797 | orchestrator | Sunday 01 June 2025 22:40:45 +0000 (0:00:00.222) 0:00:10.239 *********** 2025-06-01 22:40:46.076824 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:40:46.079348 | orchestrator | 2025-06-01 22:40:46.079402 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:40:46.079415 | orchestrator | Sunday 01 June 2025 22:40:46 +0000 (0:00:00.239) 0:00:10.478 *********** 2025-06-01 22:40:46.278984 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:40:46.279454 | orchestrator | 2025-06-01 22:40:46.280555 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:40:46.281163 | orchestrator | Sunday 01 June 2025 22:40:46 +0000 (0:00:00.202) 0:00:10.681 *********** 2025-06-01 22:40:46.472892 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:40:46.473109 | orchestrator | 2025-06-01 22:40:46.473132 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-06-01 22:40:46.473283 | orchestrator | Sunday 01 June 2025 22:40:46 +0000 (0:00:00.194) 0:00:10.875 *********** 2025-06-01 22:40:46.700271 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-06-01 22:40:46.700585 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-06-01 22:40:46.700899 | orchestrator | 2025-06-01 22:40:46.700923 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-06-01 22:40:46.701350 | orchestrator | Sunday 01 June 2025 22:40:46 +0000 (0:00:00.224) 0:00:11.099 *********** 2025-06-01 22:40:46.898548 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:40:46.898758 | orchestrator | 2025-06-01 22:40:46.899240 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-06-01 22:40:46.899805 | orchestrator | Sunday 01 June 2025 22:40:46 +0000 (0:00:00.201) 0:00:11.301 *********** 2025-06-01 22:40:47.026575 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:40:47.026680 | orchestrator | 2025-06-01 22:40:47.026758 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-06-01 22:40:47.030368 | orchestrator | Sunday 01 June 2025 22:40:47 +0000 (0:00:00.128) 0:00:11.430 *********** 2025-06-01 22:40:47.160035 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:40:47.162904 | orchestrator | 2025-06-01 22:40:47.163239 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-06-01 22:40:47.166127 | orchestrator | Sunday 01 June 2025 22:40:47 +0000 (0:00:00.131) 0:00:11.561 *********** 2025-06-01 22:40:47.317924 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:40:47.318190 | orchestrator | 2025-06-01 22:40:47.321158 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-06-01 22:40:47.321762 | orchestrator | Sunday 01 June 2025 22:40:47 +0000 (0:00:00.160) 0:00:11.721 *********** 2025-06-01 22:40:47.479270 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '008ba5ef-cc9a-56f9-b375-6638a5870e2c'}}) 2025-06-01 22:40:47.479597 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '21b07b94-4d11-536c-9a45-349f1f6df87d'}}) 2025-06-01 22:40:47.479639 | orchestrator | 
2025-06-01 22:40:47.481172 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-06-01 22:40:47.481711 | orchestrator | Sunday 01 June 2025 22:40:47 +0000 (0:00:00.157) 0:00:11.879 *********** 2025-06-01 22:40:47.616699 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '008ba5ef-cc9a-56f9-b375-6638a5870e2c'}})  2025-06-01 22:40:47.618171 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '21b07b94-4d11-536c-9a45-349f1f6df87d'}})  2025-06-01 22:40:47.618799 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:40:47.618889 | orchestrator | 2025-06-01 22:40:47.618974 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-06-01 22:40:47.619377 | orchestrator | Sunday 01 June 2025 22:40:47 +0000 (0:00:00.142) 0:00:12.021 *********** 2025-06-01 22:40:47.978762 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '008ba5ef-cc9a-56f9-b375-6638a5870e2c'}})  2025-06-01 22:40:47.979958 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '21b07b94-4d11-536c-9a45-349f1f6df87d'}})  2025-06-01 22:40:47.980350 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:40:47.980412 | orchestrator | 2025-06-01 22:40:47.980627 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-06-01 22:40:47.981101 | orchestrator | Sunday 01 June 2025 22:40:47 +0000 (0:00:00.361) 0:00:12.383 *********** 2025-06-01 22:40:48.124680 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '008ba5ef-cc9a-56f9-b375-6638a5870e2c'}})  2025-06-01 22:40:48.125464 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '21b07b94-4d11-536c-9a45-349f1f6df87d'}})  2025-06-01 22:40:48.125560 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:40:48.127159 | orchestrator | 2025-06-01 22:40:48.128079 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-06-01 22:40:48.128126 | orchestrator | Sunday 01 June 2025 22:40:48 +0000 (0:00:00.143) 0:00:12.527 *********** 2025-06-01 22:40:48.264914 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:40:48.265099 | orchestrator | 2025-06-01 22:40:48.266510 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-06-01 22:40:48.266811 | orchestrator | Sunday 01 June 2025 22:40:48 +0000 (0:00:00.137) 0:00:12.664 *********** 2025-06-01 22:40:48.413493 | orchestrator | ok: [testbed-node-3] 2025-06-01 22:40:48.413732 | orchestrator | 2025-06-01 22:40:48.414607 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-06-01 22:40:48.415857 | orchestrator | Sunday 01 June 2025 22:40:48 +0000 (0:00:00.153) 0:00:12.817 *********** 2025-06-01 22:40:48.544041 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:40:48.544171 | orchestrator | 2025-06-01 22:40:48.544281 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-06-01 22:40:48.544680 | orchestrator | Sunday 01 June 2025 22:40:48 +0000 (0:00:00.126) 0:00:12.944 *********** 2025-06-01 22:40:48.665541 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:40:48.665986 | orchestrator | 2025-06-01 22:40:48.669460 | orchestrator | TASK [Set DB+WAL devices config data] 
****************************************** 2025-06-01 22:40:48.673275 | orchestrator | Sunday 01 June 2025 22:40:48 +0000 (0:00:00.125) 0:00:13.070 *********** 2025-06-01 22:40:48.800324 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:40:48.800452 | orchestrator | 2025-06-01 22:40:48.800469 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-06-01 22:40:48.800588 | orchestrator | Sunday 01 June 2025 22:40:48 +0000 (0:00:00.133) 0:00:13.203 *********** 2025-06-01 22:40:48.957319 | orchestrator | ok: [testbed-node-3] => { 2025-06-01 22:40:48.957871 | orchestrator |  "ceph_osd_devices": { 2025-06-01 22:40:48.958552 | orchestrator |  "sdb": { 2025-06-01 22:40:48.963101 | orchestrator |  "osd_lvm_uuid": "008ba5ef-cc9a-56f9-b375-6638a5870e2c" 2025-06-01 22:40:48.963575 | orchestrator |  }, 2025-06-01 22:40:48.963792 | orchestrator |  "sdc": { 2025-06-01 22:40:48.964299 | orchestrator |  "osd_lvm_uuid": "21b07b94-4d11-536c-9a45-349f1f6df87d" 2025-06-01 22:40:48.964812 | orchestrator |  } 2025-06-01 22:40:48.965877 | orchestrator |  } 2025-06-01 22:40:48.965900 | orchestrator | } 2025-06-01 22:40:48.969197 | orchestrator | 2025-06-01 22:40:48.969665 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-06-01 22:40:48.970133 | orchestrator | Sunday 01 June 2025 22:40:48 +0000 (0:00:00.157) 0:00:13.361 *********** 2025-06-01 22:40:49.113552 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:40:49.113980 | orchestrator | 2025-06-01 22:40:49.114612 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-06-01 22:40:49.116191 | orchestrator | Sunday 01 June 2025 22:40:49 +0000 (0:00:00.154) 0:00:13.516 *********** 2025-06-01 22:40:49.237910 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:40:49.238893 | orchestrator | 2025-06-01 22:40:49.240398 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-06-01 22:40:49.240790 | orchestrator | Sunday 01 June 2025 22:40:49 +0000 (0:00:00.125) 0:00:13.641 *********** 2025-06-01 22:40:49.382203 | orchestrator | skipping: [testbed-node-3] 2025-06-01 22:40:49.382367 | orchestrator | 2025-06-01 22:40:49.382382 | orchestrator | TASK [Print configuration data] ************************************************ 2025-06-01 22:40:49.383899 | orchestrator | Sunday 01 June 2025 22:40:49 +0000 (0:00:00.142) 0:00:13.784 *********** 2025-06-01 22:40:49.595973 | orchestrator | changed: [testbed-node-3] => { 2025-06-01 22:40:49.596308 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-06-01 22:40:49.596392 | orchestrator |  "ceph_osd_devices": { 2025-06-01 22:40:49.597084 | orchestrator |  "sdb": { 2025-06-01 22:40:49.599576 | orchestrator |  "osd_lvm_uuid": "008ba5ef-cc9a-56f9-b375-6638a5870e2c" 2025-06-01 22:40:49.599644 | orchestrator |  }, 2025-06-01 22:40:49.599773 | orchestrator |  "sdc": { 2025-06-01 22:40:49.600195 | orchestrator |  "osd_lvm_uuid": "21b07b94-4d11-536c-9a45-349f1f6df87d" 2025-06-01 22:40:49.601384 | orchestrator |  } 2025-06-01 22:40:49.603489 | orchestrator |  }, 2025-06-01 22:40:49.603516 | orchestrator |  "lvm_volumes": [ 2025-06-01 22:40:49.603926 | orchestrator |  { 2025-06-01 22:40:49.604385 | orchestrator |  "data": "osd-block-008ba5ef-cc9a-56f9-b375-6638a5870e2c", 2025-06-01 22:40:49.605585 | orchestrator |  "data_vg": "ceph-008ba5ef-cc9a-56f9-b375-6638a5870e2c" 2025-06-01 22:40:49.607300 | orchestrator |  }, 2025-06-01 
22:40:49.607516 | orchestrator |  { 2025-06-01 22:40:49.607850 | orchestrator |  "data": "osd-block-21b07b94-4d11-536c-9a45-349f1f6df87d", 2025-06-01 22:40:49.608274 | orchestrator |  "data_vg": "ceph-21b07b94-4d11-536c-9a45-349f1f6df87d" 2025-06-01 22:40:49.609043 | orchestrator |  } 2025-06-01 22:40:49.609555 | orchestrator |  ] 2025-06-01 22:40:49.610228 | orchestrator |  } 2025-06-01 22:40:49.611422 | orchestrator | } 2025-06-01 22:40:49.611445 | orchestrator | 2025-06-01 22:40:49.612622 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-06-01 22:40:49.613981 | orchestrator | Sunday 01 June 2025 22:40:49 +0000 (0:00:00.213) 0:00:13.997 *********** 2025-06-01 22:40:51.801964 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-01 22:40:51.803353 | orchestrator | 2025-06-01 22:40:51.804399 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-06-01 22:40:51.805380 | orchestrator | 2025-06-01 22:40:51.806688 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-01 22:40:51.810305 | orchestrator | Sunday 01 June 2025 22:40:51 +0000 (0:00:02.207) 0:00:16.205 *********** 2025-06-01 22:40:52.050267 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-06-01 22:40:52.051940 | orchestrator | 2025-06-01 22:40:52.055476 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-01 22:40:52.056391 | orchestrator | Sunday 01 June 2025 22:40:52 +0000 (0:00:00.247) 0:00:16.452 *********** 2025-06-01 22:40:52.290313 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:40:52.291731 | orchestrator | 2025-06-01 22:40:52.294114 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:40:52.294155 | orchestrator | Sunday 01 June 2025 22:40:52 +0000 (0:00:00.240) 0:00:16.693 *********** 2025-06-01 22:40:52.685120 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-06-01 22:40:52.686521 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-06-01 22:40:52.687691 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-06-01 22:40:52.689038 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-06-01 22:40:52.690225 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-06-01 22:40:52.691323 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-06-01 22:40:52.692183 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-06-01 22:40:52.692545 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-06-01 22:40:52.693966 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-06-01 22:40:52.695269 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-06-01 22:40:52.697322 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-06-01 22:40:52.698104 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-06-01 22:40:52.699333 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-06-01 22:40:52.701233 | orchestrator | 2025-06-01 22:40:52.702457 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:40:52.703196 | orchestrator | Sunday 01 June 2025 22:40:52 +0000 (0:00:00.391) 0:00:17.085 *********** 2025-06-01 22:40:52.916165 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:40:52.920579 | orchestrator | 2025-06-01 22:40:52.920650 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:40:52.921817 | orchestrator | Sunday 01 June 2025 22:40:52 +0000 (0:00:00.233) 0:00:17.319 *********** 2025-06-01 22:40:53.157476 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:40:53.158911 | orchestrator | 2025-06-01 22:40:53.160849 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:40:53.161716 | orchestrator | Sunday 01 June 2025 22:40:53 +0000 (0:00:00.241) 0:00:17.560 *********** 2025-06-01 22:40:53.348397 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:40:53.349361 | orchestrator | 2025-06-01 22:40:53.350741 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:40:53.351773 | orchestrator | Sunday 01 June 2025 22:40:53 +0000 (0:00:00.189) 0:00:17.750 *********** 2025-06-01 22:40:53.540370 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:40:53.540502 | orchestrator | 2025-06-01 22:40:53.541294 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:40:53.541802 | orchestrator | Sunday 01 June 2025 22:40:53 +0000 (0:00:00.193) 0:00:17.943 *********** 2025-06-01 22:40:54.144529 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:40:54.145155 | orchestrator | 2025-06-01 22:40:54.145796 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:40:54.147263 | orchestrator | Sunday 01 June 2025 22:40:54 +0000 (0:00:00.603) 0:00:18.546 *********** 2025-06-01 22:40:54.353306 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:40:54.353970 | orchestrator | 2025-06-01 22:40:54.354925 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:40:54.355624 | orchestrator | Sunday 01 June 2025 22:40:54 +0000 (0:00:00.209) 0:00:18.756 *********** 2025-06-01 22:40:54.569691 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:40:54.570607 | orchestrator | 2025-06-01 22:40:54.571769 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:40:54.572455 | orchestrator | Sunday 01 June 2025 22:40:54 +0000 (0:00:00.217) 0:00:18.973 *********** 2025-06-01 22:40:54.772562 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:40:54.774677 | orchestrator | 2025-06-01 22:40:54.775245 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:40:54.776427 | orchestrator | Sunday 01 June 2025 22:40:54 +0000 (0:00:00.201) 0:00:19.175 *********** 2025-06-01 22:40:55.217427 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b16f75d9-1f35-401a-92f6-79076ad325ac) 2025-06-01 22:40:55.217692 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b16f75d9-1f35-401a-92f6-79076ad325ac) 2025-06-01 22:40:55.218718 | orchestrator | 2025-06-01 
22:40:55.220943 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:40:55.221518 | orchestrator | Sunday 01 June 2025 22:40:55 +0000 (0:00:00.444) 0:00:19.619 *********** 2025-06-01 22:40:55.646535 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9f9b614f-8ac1-443f-a8a9-e3e743fec9fb) 2025-06-01 22:40:55.647187 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9f9b614f-8ac1-443f-a8a9-e3e743fec9fb) 2025-06-01 22:40:55.649222 | orchestrator | 2025-06-01 22:40:55.649336 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:40:55.650178 | orchestrator | Sunday 01 June 2025 22:40:55 +0000 (0:00:00.428) 0:00:20.048 *********** 2025-06-01 22:40:56.065779 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_39b25e00-2509-407e-b71e-c183a8ac9680) 2025-06-01 22:40:56.065876 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_39b25e00-2509-407e-b71e-c183a8ac9680) 2025-06-01 22:40:56.066494 | orchestrator | 2025-06-01 22:40:56.070421 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:40:56.070522 | orchestrator | Sunday 01 June 2025 22:40:56 +0000 (0:00:00.420) 0:00:20.469 *********** 2025-06-01 22:40:56.524322 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_389c9d93-9871-4a47-9a60-ac279d750f3d) 2025-06-01 22:40:56.524913 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_389c9d93-9871-4a47-9a60-ac279d750f3d) 2025-06-01 22:40:56.526192 | orchestrator | 2025-06-01 22:40:56.527182 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:40:56.528105 | orchestrator | Sunday 01 June 2025 22:40:56 +0000 (0:00:00.457) 0:00:20.927 *********** 2025-06-01 22:40:56.851594 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-01 22:40:56.851687 | orchestrator | 2025-06-01 22:40:56.851704 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:40:56.851775 | orchestrator | Sunday 01 June 2025 22:40:56 +0000 (0:00:00.325) 0:00:21.252 *********** 2025-06-01 22:40:57.217365 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-06-01 22:40:57.218139 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-06-01 22:40:57.221186 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-06-01 22:40:57.221221 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-06-01 22:40:57.221522 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-06-01 22:40:57.222509 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-06-01 22:40:57.223624 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-06-01 22:40:57.224961 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-06-01 22:40:57.225377 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-06-01 22:40:57.226106 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-06-01 22:40:57.226968 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-06-01 22:40:57.227423 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-06-01 22:40:57.231089 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-06-01 22:40:57.231117 | orchestrator | 2025-06-01 22:40:57.231130 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:40:57.231141 | orchestrator | Sunday 01 June 2025 22:40:57 +0000 (0:00:00.367) 0:00:21.619 *********** 2025-06-01 22:40:57.429818 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:40:57.430882 | orchestrator | 2025-06-01 22:40:57.435290 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:40:57.435334 | orchestrator | Sunday 01 June 2025 22:40:57 +0000 (0:00:00.212) 0:00:21.832 *********** 2025-06-01 22:40:58.102794 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:40:58.104346 | orchestrator | 2025-06-01 22:40:58.106066 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:40:58.107294 | orchestrator | Sunday 01 June 2025 22:40:58 +0000 (0:00:00.670) 0:00:22.503 *********** 2025-06-01 22:40:58.312226 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:40:58.313388 | orchestrator | 2025-06-01 22:40:58.317940 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:40:58.318895 | orchestrator | Sunday 01 June 2025 22:40:58 +0000 (0:00:00.210) 0:00:22.713 *********** 2025-06-01 22:40:58.529460 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:40:58.530617 | orchestrator | 2025-06-01 22:40:58.531919 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:40:58.533287 | orchestrator | Sunday 01 June 2025 22:40:58 +0000 (0:00:00.217) 0:00:22.931 *********** 2025-06-01 22:40:58.745329 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:40:58.746688 | orchestrator | 2025-06-01 22:40:58.748111 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:40:58.749186 | orchestrator | Sunday 01 June 2025 22:40:58 +0000 (0:00:00.217) 0:00:23.148 *********** 2025-06-01 22:40:58.955218 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:40:58.956444 | orchestrator | 2025-06-01 22:40:58.957355 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:40:58.958472 | orchestrator | Sunday 01 June 2025 22:40:58 +0000 (0:00:00.210) 0:00:23.358 *********** 2025-06-01 22:40:59.200184 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:40:59.201521 | orchestrator | 2025-06-01 22:40:59.202803 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:40:59.204258 | orchestrator | Sunday 01 June 2025 22:40:59 +0000 (0:00:00.243) 0:00:23.602 *********** 2025-06-01 22:40:59.400080 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:40:59.401239 | orchestrator | 2025-06-01 22:40:59.403071 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:40:59.405174 | orchestrator | Sunday 01 June 2025 
22:40:59 +0000 (0:00:00.200) 0:00:23.802 *********** 2025-06-01 22:41:00.037409 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-06-01 22:41:00.038247 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-06-01 22:41:00.040430 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-06-01 22:41:00.041351 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-06-01 22:41:00.042263 | orchestrator | 2025-06-01 22:41:00.043188 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:41:00.044168 | orchestrator | Sunday 01 June 2025 22:41:00 +0000 (0:00:00.635) 0:00:24.437 *********** 2025-06-01 22:41:00.252349 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:41:00.253544 | orchestrator | 2025-06-01 22:41:00.254196 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:41:00.255019 | orchestrator | Sunday 01 June 2025 22:41:00 +0000 (0:00:00.217) 0:00:24.655 *********** 2025-06-01 22:41:00.467082 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:41:00.468402 | orchestrator | 2025-06-01 22:41:00.470228 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:41:00.471848 | orchestrator | Sunday 01 June 2025 22:41:00 +0000 (0:00:00.212) 0:00:24.868 *********** 2025-06-01 22:41:00.661675 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:41:00.661770 | orchestrator | 2025-06-01 22:41:00.663239 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:41:00.664208 | orchestrator | Sunday 01 June 2025 22:41:00 +0000 (0:00:00.194) 0:00:25.062 *********** 2025-06-01 22:41:00.852861 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:41:00.853155 | orchestrator | 2025-06-01 22:41:00.853961 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-06-01 22:41:00.854864 | orchestrator | Sunday 01 June 2025 22:41:00 +0000 (0:00:00.193) 0:00:25.255 *********** 2025-06-01 22:41:01.202490 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-06-01 22:41:01.202655 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-06-01 22:41:01.203732 | orchestrator | 2025-06-01 22:41:01.204234 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-06-01 22:41:01.205101 | orchestrator | Sunday 01 June 2025 22:41:01 +0000 (0:00:00.349) 0:00:25.605 *********** 2025-06-01 22:41:01.353676 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:41:01.354735 | orchestrator | 2025-06-01 22:41:01.355199 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-06-01 22:41:01.355585 | orchestrator | Sunday 01 June 2025 22:41:01 +0000 (0:00:00.151) 0:00:25.757 *********** 2025-06-01 22:41:01.490798 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:41:01.496059 | orchestrator | 2025-06-01 22:41:01.496793 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-06-01 22:41:01.497250 | orchestrator | Sunday 01 June 2025 22:41:01 +0000 (0:00:00.135) 0:00:25.892 *********** 2025-06-01 22:41:01.629429 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:41:01.629528 | orchestrator | 2025-06-01 22:41:01.632038 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-06-01 
22:41:01.633163 | orchestrator | Sunday 01 June 2025 22:41:01 +0000 (0:00:00.138) 0:00:26.031 *********** 2025-06-01 22:41:01.775790 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:41:01.776300 | orchestrator | 2025-06-01 22:41:01.777455 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-06-01 22:41:01.778718 | orchestrator | Sunday 01 June 2025 22:41:01 +0000 (0:00:00.145) 0:00:26.177 *********** 2025-06-01 22:41:01.940248 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e43a5796-5555-5d7b-8188-8712d414b3d1'}}) 2025-06-01 22:41:01.943050 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3aa9cf12-e8a4-5f15-a0dc-00261f7d28af'}}) 2025-06-01 22:41:01.944224 | orchestrator | 2025-06-01 22:41:01.946178 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-06-01 22:41:01.947474 | orchestrator | Sunday 01 June 2025 22:41:01 +0000 (0:00:00.166) 0:00:26.343 *********** 2025-06-01 22:41:02.087029 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e43a5796-5555-5d7b-8188-8712d414b3d1'}})  2025-06-01 22:41:02.087319 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3aa9cf12-e8a4-5f15-a0dc-00261f7d28af'}})  2025-06-01 22:41:02.088218 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:41:02.088843 | orchestrator | 2025-06-01 22:41:02.089913 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-06-01 22:41:02.090435 | orchestrator | Sunday 01 June 2025 22:41:02 +0000 (0:00:00.146) 0:00:26.489 *********** 2025-06-01 22:41:02.254422 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e43a5796-5555-5d7b-8188-8712d414b3d1'}})  2025-06-01 22:41:02.255972 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3aa9cf12-e8a4-5f15-a0dc-00261f7d28af'}})  2025-06-01 22:41:02.257531 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:41:02.258634 | orchestrator | 2025-06-01 22:41:02.259318 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-06-01 22:41:02.260478 | orchestrator | Sunday 01 June 2025 22:41:02 +0000 (0:00:00.166) 0:00:26.656 *********** 2025-06-01 22:41:02.402564 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e43a5796-5555-5d7b-8188-8712d414b3d1'}})  2025-06-01 22:41:02.402738 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3aa9cf12-e8a4-5f15-a0dc-00261f7d28af'}})  2025-06-01 22:41:02.404387 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:41:02.405509 | orchestrator | 2025-06-01 22:41:02.406702 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-06-01 22:41:02.407369 | orchestrator | Sunday 01 June 2025 22:41:02 +0000 (0:00:00.148) 0:00:26.804 *********** 2025-06-01 22:41:02.541234 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:41:02.541499 | orchestrator | 2025-06-01 22:41:02.543233 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-06-01 22:41:02.543257 | orchestrator | Sunday 01 June 2025 22:41:02 +0000 (0:00:00.138) 0:00:26.942 *********** 2025-06-01 22:41:02.687598 | orchestrator | ok: [testbed-node-4] 2025-06-01 22:41:02.688547 
| orchestrator | 2025-06-01 22:41:02.688947 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-06-01 22:41:02.690233 | orchestrator | Sunday 01 June 2025 22:41:02 +0000 (0:00:00.147) 0:00:27.090 *********** 2025-06-01 22:41:02.822250 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:41:02.823125 | orchestrator | 2025-06-01 22:41:02.823684 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-06-01 22:41:02.826164 | orchestrator | Sunday 01 June 2025 22:41:02 +0000 (0:00:00.134) 0:00:27.225 *********** 2025-06-01 22:41:03.183250 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:41:03.183405 | orchestrator | 2025-06-01 22:41:03.184459 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-06-01 22:41:03.184828 | orchestrator | Sunday 01 June 2025 22:41:03 +0000 (0:00:00.360) 0:00:27.585 *********** 2025-06-01 22:41:03.311855 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:41:03.316554 | orchestrator | 2025-06-01 22:41:03.316598 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-06-01 22:41:03.316611 | orchestrator | Sunday 01 June 2025 22:41:03 +0000 (0:00:00.127) 0:00:27.713 *********** 2025-06-01 22:41:03.461380 | orchestrator | ok: [testbed-node-4] => { 2025-06-01 22:41:03.463189 | orchestrator |  "ceph_osd_devices": { 2025-06-01 22:41:03.467403 | orchestrator |  "sdb": { 2025-06-01 22:41:03.468187 | orchestrator |  "osd_lvm_uuid": "e43a5796-5555-5d7b-8188-8712d414b3d1" 2025-06-01 22:41:03.469300 | orchestrator |  }, 2025-06-01 22:41:03.470416 | orchestrator |  "sdc": { 2025-06-01 22:41:03.471810 | orchestrator |  "osd_lvm_uuid": "3aa9cf12-e8a4-5f15-a0dc-00261f7d28af" 2025-06-01 22:41:03.473781 | orchestrator |  } 2025-06-01 22:41:03.475332 | orchestrator |  } 2025-06-01 22:41:03.476561 | orchestrator | } 2025-06-01 22:41:03.477424 | orchestrator | 2025-06-01 22:41:03.477928 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-06-01 22:41:03.479116 | orchestrator | Sunday 01 June 2025 22:41:03 +0000 (0:00:00.149) 0:00:27.863 *********** 2025-06-01 22:41:03.602274 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:41:03.607067 | orchestrator | 2025-06-01 22:41:03.607221 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-06-01 22:41:03.608685 | orchestrator | Sunday 01 June 2025 22:41:03 +0000 (0:00:00.139) 0:00:28.003 *********** 2025-06-01 22:41:03.728523 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:41:03.728932 | orchestrator | 2025-06-01 22:41:03.729150 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-06-01 22:41:03.729280 | orchestrator | Sunday 01 June 2025 22:41:03 +0000 (0:00:00.128) 0:00:28.131 *********** 2025-06-01 22:41:03.874834 | orchestrator | skipping: [testbed-node-4] 2025-06-01 22:41:03.875148 | orchestrator | 2025-06-01 22:41:03.875284 | orchestrator | TASK [Print configuration data] ************************************************ 2025-06-01 22:41:03.875548 | orchestrator | Sunday 01 June 2025 22:41:03 +0000 (0:00:00.147) 0:00:28.278 *********** 2025-06-01 22:41:04.069836 | orchestrator | changed: [testbed-node-4] => { 2025-06-01 22:41:04.070795 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-06-01 22:41:04.074125 | orchestrator |  "ceph_osd_devices": { 2025-06-01 
22:41:04.074164 | orchestrator |  "sdb": { 2025-06-01 22:41:04.075241 | orchestrator |  "osd_lvm_uuid": "e43a5796-5555-5d7b-8188-8712d414b3d1" 2025-06-01 22:41:04.075847 | orchestrator |  }, 2025-06-01 22:41:04.076462 | orchestrator |  "sdc": { 2025-06-01 22:41:04.077428 | orchestrator |  "osd_lvm_uuid": "3aa9cf12-e8a4-5f15-a0dc-00261f7d28af" 2025-06-01 22:41:04.078672 | orchestrator |  } 2025-06-01 22:41:04.079884 | orchestrator |  }, 2025-06-01 22:41:04.080723 | orchestrator |  "lvm_volumes": [ 2025-06-01 22:41:04.081231 | orchestrator |  { 2025-06-01 22:41:04.081911 | orchestrator |  "data": "osd-block-e43a5796-5555-5d7b-8188-8712d414b3d1", 2025-06-01 22:41:04.082712 | orchestrator |  "data_vg": "ceph-e43a5796-5555-5d7b-8188-8712d414b3d1" 2025-06-01 22:41:04.084076 | orchestrator |  }, 2025-06-01 22:41:04.084281 | orchestrator |  { 2025-06-01 22:41:04.085791 | orchestrator |  "data": "osd-block-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af", 2025-06-01 22:41:04.086367 | orchestrator |  "data_vg": "ceph-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af" 2025-06-01 22:41:04.087441 | orchestrator |  } 2025-06-01 22:41:04.088637 | orchestrator |  ] 2025-06-01 22:41:04.089640 | orchestrator |  } 2025-06-01 22:41:04.090625 | orchestrator | } 2025-06-01 22:41:04.091613 | orchestrator | 2025-06-01 22:41:04.092149 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-06-01 22:41:04.092560 | orchestrator | Sunday 01 June 2025 22:41:04 +0000 (0:00:00.193) 0:00:28.472 *********** 2025-06-01 22:41:05.256750 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-06-01 22:41:05.261745 | orchestrator | 2025-06-01 22:41:05.262156 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-06-01 22:41:05.262564 | orchestrator | 2025-06-01 22:41:05.262767 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-01 22:41:05.263187 | orchestrator | Sunday 01 June 2025 22:41:05 +0000 (0:00:01.184) 0:00:29.657 *********** 2025-06-01 22:41:05.762844 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-06-01 22:41:05.762963 | orchestrator | 2025-06-01 22:41:05.763542 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-01 22:41:05.765657 | orchestrator | Sunday 01 June 2025 22:41:05 +0000 (0:00:00.507) 0:00:30.164 *********** 2025-06-01 22:41:06.497393 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:41:06.497819 | orchestrator | 2025-06-01 22:41:06.500445 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:41:06.501414 | orchestrator | Sunday 01 June 2025 22:41:06 +0000 (0:00:00.734) 0:00:30.898 *********** 2025-06-01 22:41:06.881603 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-06-01 22:41:06.882343 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-06-01 22:41:06.884963 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-06-01 22:41:06.885360 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-06-01 22:41:06.885632 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-06-01 22:41:06.886158 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop5) 2025-06-01 22:41:06.886526 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-06-01 22:41:06.887248 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-06-01 22:41:06.887430 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-06-01 22:41:06.887644 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-06-01 22:41:06.887665 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-06-01 22:41:06.888073 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-06-01 22:41:06.888362 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-06-01 22:41:06.888496 | orchestrator | 2025-06-01 22:41:06.888854 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:41:06.890389 | orchestrator | Sunday 01 June 2025 22:41:06 +0000 (0:00:00.384) 0:00:31.282 *********** 2025-06-01 22:41:07.092829 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:41:07.095514 | orchestrator | 2025-06-01 22:41:07.097237 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:41:07.098415 | orchestrator | Sunday 01 June 2025 22:41:07 +0000 (0:00:00.212) 0:00:31.495 *********** 2025-06-01 22:41:07.296462 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:41:07.298580 | orchestrator | 2025-06-01 22:41:07.299360 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:41:07.301179 | orchestrator | Sunday 01 June 2025 22:41:07 +0000 (0:00:00.204) 0:00:31.700 *********** 2025-06-01 22:41:07.531623 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:41:07.535344 | orchestrator | 2025-06-01 22:41:07.535783 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:41:07.536921 | orchestrator | Sunday 01 June 2025 22:41:07 +0000 (0:00:00.233) 0:00:31.933 *********** 2025-06-01 22:41:07.739382 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:41:07.741337 | orchestrator | 2025-06-01 22:41:07.742459 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:41:07.743254 | orchestrator | Sunday 01 June 2025 22:41:07 +0000 (0:00:00.209) 0:00:32.142 *********** 2025-06-01 22:41:07.944474 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:41:07.944699 | orchestrator | 2025-06-01 22:41:07.945878 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:41:07.947178 | orchestrator | Sunday 01 June 2025 22:41:07 +0000 (0:00:00.204) 0:00:32.347 *********** 2025-06-01 22:41:08.164576 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:41:08.165034 | orchestrator | 2025-06-01 22:41:08.166575 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:41:08.167264 | orchestrator | Sunday 01 June 2025 22:41:08 +0000 (0:00:00.217) 0:00:32.564 *********** 2025-06-01 22:41:08.342305 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:41:08.343300 | orchestrator | 2025-06-01 22:41:08.344922 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2025-06-01 22:41:08.345149 | orchestrator | Sunday 01 June 2025 22:41:08 +0000 (0:00:00.179) 0:00:32.744 *********** 2025-06-01 22:41:08.546541 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:41:08.547086 | orchestrator | 2025-06-01 22:41:08.547354 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:41:08.548592 | orchestrator | Sunday 01 June 2025 22:41:08 +0000 (0:00:00.205) 0:00:32.949 *********** 2025-06-01 22:41:09.188665 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ee682ed7-cf61-4b4b-b7dc-0c09473318ce) 2025-06-01 22:41:09.188768 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ee682ed7-cf61-4b4b-b7dc-0c09473318ce) 2025-06-01 22:41:09.189329 | orchestrator | 2025-06-01 22:41:09.190514 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:41:09.190679 | orchestrator | Sunday 01 June 2025 22:41:09 +0000 (0:00:00.640) 0:00:33.589 *********** 2025-06-01 22:41:10.047512 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b890f567-0ad2-40b6-bedf-e62e59fc0322) 2025-06-01 22:41:10.047699 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b890f567-0ad2-40b6-bedf-e62e59fc0322) 2025-06-01 22:41:10.049051 | orchestrator | 2025-06-01 22:41:10.050836 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:41:10.051505 | orchestrator | Sunday 01 June 2025 22:41:10 +0000 (0:00:00.860) 0:00:34.450 *********** 2025-06-01 22:41:10.495916 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9eb75d32-600b-4da1-bdd4-064d087d06d5) 2025-06-01 22:41:10.496198 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9eb75d32-600b-4da1-bdd4-064d087d06d5) 2025-06-01 22:41:10.498118 | orchestrator | 2025-06-01 22:41:10.498267 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:41:10.500129 | orchestrator | Sunday 01 June 2025 22:41:10 +0000 (0:00:00.450) 0:00:34.900 *********** 2025-06-01 22:41:10.936871 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a8e8789d-2f8d-4752-a1c5-15f6e96bd27f) 2025-06-01 22:41:10.937479 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a8e8789d-2f8d-4752-a1c5-15f6e96bd27f) 2025-06-01 22:41:10.938547 | orchestrator | 2025-06-01 22:41:10.939927 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 22:41:10.941042 | orchestrator | Sunday 01 June 2025 22:41:10 +0000 (0:00:00.438) 0:00:35.338 *********** 2025-06-01 22:41:11.269892 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-01 22:41:11.271233 | orchestrator | 2025-06-01 22:41:11.273365 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:41:11.274449 | orchestrator | Sunday 01 June 2025 22:41:11 +0000 (0:00:00.333) 0:00:35.672 *********** 2025-06-01 22:41:11.650962 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-06-01 22:41:11.651179 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-06-01 22:41:11.653181 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-06-01 22:41:11.653662 | orchestrator 
| included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-06-01 22:41:11.654687 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-06-01 22:41:11.655561 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-06-01 22:41:11.656221 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-06-01 22:41:11.656928 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-06-01 22:41:11.657609 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-06-01 22:41:11.657859 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-06-01 22:41:11.659402 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-06-01 22:41:11.660392 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-06-01 22:41:11.661987 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-06-01 22:41:11.663040 | orchestrator | 2025-06-01 22:41:11.664076 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:41:11.664919 | orchestrator | Sunday 01 June 2025 22:41:11 +0000 (0:00:00.380) 0:00:36.052 *********** 2025-06-01 22:41:11.862534 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:41:11.863248 | orchestrator | 2025-06-01 22:41:11.864326 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:41:11.865454 | orchestrator | Sunday 01 June 2025 22:41:11 +0000 (0:00:00.211) 0:00:36.264 *********** 2025-06-01 22:41:12.062309 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:41:12.062487 | orchestrator | 2025-06-01 22:41:12.063166 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:41:12.063738 | orchestrator | Sunday 01 June 2025 22:41:12 +0000 (0:00:00.201) 0:00:36.465 *********** 2025-06-01 22:41:12.277066 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:41:12.277675 | orchestrator | 2025-06-01 22:41:12.279435 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:41:12.281547 | orchestrator | Sunday 01 June 2025 22:41:12 +0000 (0:00:00.214) 0:00:36.679 *********** 2025-06-01 22:41:12.587398 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:41:12.588619 | orchestrator | 2025-06-01 22:41:12.590109 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:41:12.592045 | orchestrator | Sunday 01 June 2025 22:41:12 +0000 (0:00:00.310) 0:00:36.990 *********** 2025-06-01 22:41:12.805866 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:41:12.807240 | orchestrator | 2025-06-01 22:41:12.809651 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:41:12.809691 | orchestrator | Sunday 01 June 2025 22:41:12 +0000 (0:00:00.216) 0:00:37.207 *********** 2025-06-01 22:41:13.526473 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:41:13.527403 | orchestrator | 2025-06-01 22:41:13.528116 | orchestrator | TASK [Add known partitions to the list of available block devices] 
************* 2025-06-01 22:41:13.528824 | orchestrator | Sunday 01 June 2025 22:41:13 +0000 (0:00:00.720) 0:00:37.927 *********** 2025-06-01 22:41:13.749285 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:41:13.749508 | orchestrator | 2025-06-01 22:41:13.750293 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:41:13.751097 | orchestrator | Sunday 01 June 2025 22:41:13 +0000 (0:00:00.223) 0:00:38.152 *********** 2025-06-01 22:41:13.995254 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:41:13.995576 | orchestrator | 2025-06-01 22:41:13.996507 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:41:13.997007 | orchestrator | Sunday 01 June 2025 22:41:13 +0000 (0:00:00.246) 0:00:38.398 *********** 2025-06-01 22:41:14.828500 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-06-01 22:41:14.829424 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-06-01 22:41:14.830152 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-06-01 22:41:14.830753 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-06-01 22:41:14.832093 | orchestrator | 2025-06-01 22:41:14.832778 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:41:14.833511 | orchestrator | Sunday 01 June 2025 22:41:14 +0000 (0:00:00.831) 0:00:39.230 *********** 2025-06-01 22:41:15.063792 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:41:15.063891 | orchestrator | 2025-06-01 22:41:15.064688 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:41:15.066284 | orchestrator | Sunday 01 June 2025 22:41:15 +0000 (0:00:00.236) 0:00:39.466 *********** 2025-06-01 22:41:15.377567 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:41:15.380712 | orchestrator | 2025-06-01 22:41:15.380859 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:41:15.383135 | orchestrator | Sunday 01 June 2025 22:41:15 +0000 (0:00:00.312) 0:00:39.779 *********** 2025-06-01 22:41:15.593052 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:41:15.593381 | orchestrator | 2025-06-01 22:41:15.593410 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 22:41:15.593505 | orchestrator | Sunday 01 June 2025 22:41:15 +0000 (0:00:00.215) 0:00:39.995 *********** 2025-06-01 22:41:15.819884 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:41:15.821210 | orchestrator | 2025-06-01 22:41:15.822717 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-06-01 22:41:15.822998 | orchestrator | Sunday 01 June 2025 22:41:15 +0000 (0:00:00.225) 0:00:40.220 *********** 2025-06-01 22:41:16.000787 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-06-01 22:41:16.000899 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-06-01 22:41:16.002190 | orchestrator | 2025-06-01 22:41:16.004373 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-06-01 22:41:16.005400 | orchestrator | Sunday 01 June 2025 22:41:15 +0000 (0:00:00.182) 0:00:40.402 *********** 2025-06-01 22:41:16.148536 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:41:16.148728 | orchestrator | 2025-06-01 22:41:16.150138 | orchestrator | TASK [Generate DB 
VG names] **************************************************** 2025-06-01 22:41:16.152186 | orchestrator | Sunday 01 June 2025 22:41:16 +0000 (0:00:00.148) 0:00:40.551 *********** 2025-06-01 22:41:16.283701 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:41:16.286368 | orchestrator | 2025-06-01 22:41:16.287541 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-06-01 22:41:16.288427 | orchestrator | Sunday 01 June 2025 22:41:16 +0000 (0:00:00.135) 0:00:40.686 *********** 2025-06-01 22:41:16.429183 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:41:16.430363 | orchestrator | 2025-06-01 22:41:16.431732 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-06-01 22:41:16.433076 | orchestrator | Sunday 01 June 2025 22:41:16 +0000 (0:00:00.145) 0:00:40.831 *********** 2025-06-01 22:41:16.803844 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:41:16.804879 | orchestrator | 2025-06-01 22:41:16.806146 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-06-01 22:41:16.807144 | orchestrator | Sunday 01 June 2025 22:41:16 +0000 (0:00:00.374) 0:00:41.205 *********** 2025-06-01 22:41:16.978834 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '94e6c78b-35f7-5cb8-865b-5befb7b6694e'}}) 2025-06-01 22:41:16.980069 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0de39833-f6ff-5bf1-9ca3-735e32822edb'}}) 2025-06-01 22:41:16.983368 | orchestrator | 2025-06-01 22:41:16.983460 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-06-01 22:41:16.983477 | orchestrator | Sunday 01 June 2025 22:41:16 +0000 (0:00:00.175) 0:00:41.381 *********** 2025-06-01 22:41:17.133405 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '94e6c78b-35f7-5cb8-865b-5befb7b6694e'}})  2025-06-01 22:41:17.134476 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0de39833-f6ff-5bf1-9ca3-735e32822edb'}})  2025-06-01 22:41:17.136141 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:41:17.137745 | orchestrator | 2025-06-01 22:41:17.138468 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-06-01 22:41:17.139858 | orchestrator | Sunday 01 June 2025 22:41:17 +0000 (0:00:00.153) 0:00:41.535 *********** 2025-06-01 22:41:17.308398 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '94e6c78b-35f7-5cb8-865b-5befb7b6694e'}})  2025-06-01 22:41:17.308543 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0de39833-f6ff-5bf1-9ca3-735e32822edb'}})  2025-06-01 22:41:17.312361 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:41:17.312405 | orchestrator | 2025-06-01 22:41:17.312420 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-06-01 22:41:17.312449 | orchestrator | Sunday 01 June 2025 22:41:17 +0000 (0:00:00.172) 0:00:41.708 *********** 2025-06-01 22:41:17.491425 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '94e6c78b-35f7-5cb8-865b-5befb7b6694e'}})  2025-06-01 22:41:17.492374 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0de39833-f6ff-5bf1-9ca3-735e32822edb'}})  2025-06-01 
22:41:17.493106 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:41:17.493813 | orchestrator | 2025-06-01 22:41:17.495765 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-06-01 22:41:17.495801 | orchestrator | Sunday 01 June 2025 22:41:17 +0000 (0:00:00.186) 0:00:41.895 *********** 2025-06-01 22:41:17.645634 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:41:17.646194 | orchestrator | 2025-06-01 22:41:17.647323 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-06-01 22:41:17.648352 | orchestrator | Sunday 01 June 2025 22:41:17 +0000 (0:00:00.152) 0:00:42.048 *********** 2025-06-01 22:41:17.795347 | orchestrator | ok: [testbed-node-5] 2025-06-01 22:41:17.797856 | orchestrator | 2025-06-01 22:41:17.798177 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-06-01 22:41:17.799246 | orchestrator | Sunday 01 June 2025 22:41:17 +0000 (0:00:00.147) 0:00:42.195 *********** 2025-06-01 22:41:17.932667 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:41:17.934153 | orchestrator | 2025-06-01 22:41:17.934954 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-06-01 22:41:17.936168 | orchestrator | Sunday 01 June 2025 22:41:17 +0000 (0:00:00.139) 0:00:42.335 *********** 2025-06-01 22:41:18.070596 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:41:18.071615 | orchestrator | 2025-06-01 22:41:18.072801 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-06-01 22:41:18.073999 | orchestrator | Sunday 01 June 2025 22:41:18 +0000 (0:00:00.137) 0:00:42.473 *********** 2025-06-01 22:41:18.231253 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:41:18.231433 | orchestrator | 2025-06-01 22:41:18.232255 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-06-01 22:41:18.233031 | orchestrator | Sunday 01 June 2025 22:41:18 +0000 (0:00:00.159) 0:00:42.633 *********** 2025-06-01 22:41:18.369092 | orchestrator | ok: [testbed-node-5] => { 2025-06-01 22:41:18.369886 | orchestrator |  "ceph_osd_devices": { 2025-06-01 22:41:18.372250 | orchestrator |  "sdb": { 2025-06-01 22:41:18.372303 | orchestrator |  "osd_lvm_uuid": "94e6c78b-35f7-5cb8-865b-5befb7b6694e" 2025-06-01 22:41:18.372707 | orchestrator |  }, 2025-06-01 22:41:18.373898 | orchestrator |  "sdc": { 2025-06-01 22:41:18.374349 | orchestrator |  "osd_lvm_uuid": "0de39833-f6ff-5bf1-9ca3-735e32822edb" 2025-06-01 22:41:18.375145 | orchestrator |  } 2025-06-01 22:41:18.375869 | orchestrator |  } 2025-06-01 22:41:18.376596 | orchestrator | } 2025-06-01 22:41:18.377363 | orchestrator | 2025-06-01 22:41:18.377820 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-06-01 22:41:18.379064 | orchestrator | Sunday 01 June 2025 22:41:18 +0000 (0:00:00.138) 0:00:42.771 *********** 2025-06-01 22:41:18.513446 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:41:18.514210 | orchestrator | 2025-06-01 22:41:18.514586 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-06-01 22:41:18.515195 | orchestrator | Sunday 01 June 2025 22:41:18 +0000 (0:00:00.145) 0:00:42.916 *********** 2025-06-01 22:41:18.886588 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:41:18.887401 | orchestrator | 2025-06-01 22:41:18.889664 | orchestrator | 
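For testbed-node-5 the same configuration dump and write-out follow below; the handler delegates to testbed-manager (192.168.16.5), suggesting a per-host vars file rendered on the manager. Its approximate shape, using the values printed for testbed-node-3 earlier (the file location and top-level layout are assumptions; the lvm_volumes list itself is the documented ceph-ansible format for pre-created data VG/LV deployments):

# e.g. host_vars for testbed-node-3 (hypothetical location)
ceph_osd_devices:
  sdb:
    osd_lvm_uuid: 008ba5ef-cc9a-56f9-b375-6638a5870e2c
  sdc:
    osd_lvm_uuid: 21b07b94-4d11-536c-9a45-349f1f6df87d
lvm_volumes:
  - data: osd-block-008ba5ef-cc9a-56f9-b375-6638a5870e2c
    data_vg: ceph-008ba5ef-cc9a-56f9-b375-6638a5870e2c
  - data: osd-block-21b07b94-4d11-536c-9a45-349f1f6df87d
    data_vg: ceph-21b07b94-4d11-536c-9a45-349f1f6df87d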
TASK [Print shared DB/WAL devices] ********************************************* 2025-06-01 22:41:18.889788 | orchestrator | Sunday 01 June 2025 22:41:18 +0000 (0:00:00.371) 0:00:43.287 *********** 2025-06-01 22:41:19.030516 | orchestrator | skipping: [testbed-node-5] 2025-06-01 22:41:19.032161 | orchestrator | 2025-06-01 22:41:19.033135 | orchestrator | TASK [Print configuration data] ************************************************ 2025-06-01 22:41:19.033524 | orchestrator | Sunday 01 June 2025 22:41:19 +0000 (0:00:00.142) 0:00:43.430 *********** 2025-06-01 22:41:19.232518 | orchestrator | changed: [testbed-node-5] => { 2025-06-01 22:41:19.233617 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-06-01 22:41:19.235408 | orchestrator |  "ceph_osd_devices": { 2025-06-01 22:41:19.236427 | orchestrator |  "sdb": { 2025-06-01 22:41:19.237218 | orchestrator |  "osd_lvm_uuid": "94e6c78b-35f7-5cb8-865b-5befb7b6694e" 2025-06-01 22:41:19.238215 | orchestrator |  }, 2025-06-01 22:41:19.239123 | orchestrator |  "sdc": { 2025-06-01 22:41:19.240079 | orchestrator |  "osd_lvm_uuid": "0de39833-f6ff-5bf1-9ca3-735e32822edb" 2025-06-01 22:41:19.240895 | orchestrator |  } 2025-06-01 22:41:19.241649 | orchestrator |  }, 2025-06-01 22:41:19.242562 | orchestrator |  "lvm_volumes": [ 2025-06-01 22:41:19.242869 | orchestrator |  { 2025-06-01 22:41:19.244604 | orchestrator |  "data": "osd-block-94e6c78b-35f7-5cb8-865b-5befb7b6694e", 2025-06-01 22:41:19.245537 | orchestrator |  "data_vg": "ceph-94e6c78b-35f7-5cb8-865b-5befb7b6694e" 2025-06-01 22:41:19.246166 | orchestrator |  }, 2025-06-01 22:41:19.246927 | orchestrator |  { 2025-06-01 22:41:19.247519 | orchestrator |  "data": "osd-block-0de39833-f6ff-5bf1-9ca3-735e32822edb", 2025-06-01 22:41:19.248613 | orchestrator |  "data_vg": "ceph-0de39833-f6ff-5bf1-9ca3-735e32822edb" 2025-06-01 22:41:19.249086 | orchestrator |  } 2025-06-01 22:41:19.250095 | orchestrator |  ] 2025-06-01 22:41:19.250551 | orchestrator |  } 2025-06-01 22:41:19.250952 | orchestrator | } 2025-06-01 22:41:19.251712 | orchestrator | 2025-06-01 22:41:19.252077 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-06-01 22:41:19.252493 | orchestrator | Sunday 01 June 2025 22:41:19 +0000 (0:00:00.204) 0:00:43.634 *********** 2025-06-01 22:41:20.234522 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-06-01 22:41:20.235456 | orchestrator | 2025-06-01 22:41:20.237433 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 22:41:20.237485 | orchestrator | 2025-06-01 22:41:20 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-01 22:41:20.237502 | orchestrator | 2025-06-01 22:41:20 | INFO  | Please wait and do not abort execution. 
2025-06-01 22:41:20.239056 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-06-01 22:41:20.240672 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-06-01 22:41:20.241705 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-06-01 22:41:20.242638 | orchestrator |
2025-06-01 22:41:20.243756 | orchestrator |
2025-06-01 22:41:20.244355 | orchestrator |
2025-06-01 22:41:20.245503 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 22:41:20.246289 | orchestrator | Sunday 01 June 2025 22:41:20 +0000 (0:00:01.001) 0:00:44.636 ***********
2025-06-01 22:41:20.247324 | orchestrator | ===============================================================================
2025-06-01 22:41:20.247534 | orchestrator | Write configuration file ------------------------------------------------ 4.39s
2025-06-01 22:41:20.248493 | orchestrator | Get initial list of available block devices ----------------------------- 1.35s
2025-06-01 22:41:20.249205 | orchestrator | Add known links to the list of available block devices ------------------ 1.21s
2025-06-01 22:41:20.250864 | orchestrator | Add known partitions to the list of available block devices ------------- 1.14s
2025-06-01 22:41:20.251681 | orchestrator | Add known partitions to the list of available block devices ------------- 1.12s
2025-06-01 22:41:20.252871 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.01s
2025-06-01 22:41:20.253644 | orchestrator | Add known links to the list of available block devices ------------------ 0.86s
2025-06-01 22:41:20.254615 | orchestrator | Add known partitions to the list of available block devices ------------- 0.83s
2025-06-01 22:41:20.255673 | orchestrator | Add known links to the list of available block devices ------------------ 0.80s
2025-06-01 22:41:20.257059 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.76s
2025-06-01 22:41:20.257758 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s
2025-06-01 22:41:20.259614 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s
2025-06-01 22:41:20.260643 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.70s
2025-06-01 22:41:20.261004 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.68s
2025-06-01 22:41:20.262541 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s
2025-06-01 22:41:20.263257 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s
2025-06-01 22:41:20.264092 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s
2025-06-01 22:41:20.265047 | orchestrator | Add known partitions to the list of available block devices ------------- 0.64s
2025-06-01 22:41:20.265784 | orchestrator | Print DB devices -------------------------------------------------------- 0.62s
2025-06-01 22:41:20.266457 | orchestrator | Set WAL devices config data --------------------------------------------- 0.62s
2025-06-01 22:41:32.747190 | orchestrator | Registering Redlock._acquired_script
2025-06-01 22:41:32.747304 | orchestrator | Registering Redlock._extend_script
2025-06-01 22:41:32.747319 | orchestrator | Registering Redlock._release_script
2025-06-01 22:41:32.807440 | orchestrator | 2025-06-01 22:41:32 | INFO  | Task 87f8c3cf-b432-4be0-9705-a690ec216d6a (sync inventory) is running in background. Output coming soon.
2025-06-01 23:41:35.424454 | orchestrator | 2025-06-01 23:41:35 | INFO  | Task 71835219-d106-4edf-a39a-a3baf42785b7 (ceph-create-lvm-devices) was prepared for execution.
2025-06-01 23:41:35.424580 | orchestrator | 2025-06-01 23:41:35 | INFO  | It takes a moment until task 71835219-d106-4edf-a39a-a3baf42785b7 (ceph-create-lvm-devices) has been started and output is visible here.
2025-06-01 23:41:39.703417 | orchestrator |
2025-06-01 23:41:39.704653 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-06-01 23:41:39.705064 | orchestrator |
2025-06-01 23:41:39.706248 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-01 23:41:39.708305 | orchestrator | Sunday 01 June 2025 23:41:39 +0000 (0:00:00.308) 0:00:00.308 ***********
2025-06-01 23:41:39.931826 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-01 23:41:39.933826 | orchestrator |
2025-06-01 23:41:39.934761 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-01 23:41:39.936207 | orchestrator | Sunday 01 June 2025 23:41:39 +0000 (0:00:00.234) 0:00:00.542 ***********
2025-06-01 23:41:40.164485 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:41:40.164584 | orchestrator |
2025-06-01 23:41:40.164991 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 23:41:40.166004 | orchestrator | Sunday 01 June 2025 23:41:40 +0000 (0:00:00.230) 0:00:00.772 ***********
2025-06-01 23:41:40.562943 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-06-01 23:41:40.563397 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-06-01 23:41:40.564497 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-06-01 23:41:40.566270 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-06-01 23:41:40.567445 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-06-01 23:41:40.569318 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-06-01 23:41:40.570619 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-06-01 23:41:40.572514 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-06-01 23:41:40.573837 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-06-01 23:41:40.575013 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-06-01 23:41:40.576291 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-06-01 23:41:40.577368 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-06-01 23:41:40.578562 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-06-01 23:41:40.579713 | orchestrator |
2025-06-01 23:41:40.581269 | orchestrator | TASK [Add known
links to the list of available block devices] ****************** 2025-06-01 23:41:40.581494 | orchestrator | Sunday 01 June 2025 23:41:40 +0000 (0:00:00.401) 0:00:01.174 *********** 2025-06-01 23:41:41.039024 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:41.039130 | orchestrator | 2025-06-01 23:41:41.042480 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 23:41:41.044064 | orchestrator | Sunday 01 June 2025 23:41:41 +0000 (0:00:00.471) 0:00:01.646 *********** 2025-06-01 23:41:41.237903 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:41.238007 | orchestrator | 2025-06-01 23:41:41.238074 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 23:41:41.238688 | orchestrator | Sunday 01 June 2025 23:41:41 +0000 (0:00:00.199) 0:00:01.846 *********** 2025-06-01 23:41:41.444119 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:41.445131 | orchestrator | 2025-06-01 23:41:41.446315 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 23:41:41.447544 | orchestrator | Sunday 01 June 2025 23:41:41 +0000 (0:00:00.206) 0:00:02.052 *********** 2025-06-01 23:41:41.628552 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:41.628643 | orchestrator | 2025-06-01 23:41:41.629363 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 23:41:41.631026 | orchestrator | Sunday 01 June 2025 23:41:41 +0000 (0:00:00.184) 0:00:02.237 *********** 2025-06-01 23:41:41.828146 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:41.829612 | orchestrator | 2025-06-01 23:41:41.831445 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 23:41:41.832380 | orchestrator | Sunday 01 June 2025 23:41:41 +0000 (0:00:00.199) 0:00:02.436 *********** 2025-06-01 23:41:42.036341 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:42.036461 | orchestrator | 2025-06-01 23:41:42.036477 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 23:41:42.036572 | orchestrator | Sunday 01 June 2025 23:41:42 +0000 (0:00:00.207) 0:00:02.643 *********** 2025-06-01 23:41:42.238727 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:42.238820 | orchestrator | 2025-06-01 23:41:42.238833 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 23:41:42.238892 | orchestrator | Sunday 01 June 2025 23:41:42 +0000 (0:00:00.204) 0:00:02.848 *********** 2025-06-01 23:41:42.425129 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:42.425350 | orchestrator | 2025-06-01 23:41:42.426237 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 23:41:42.427336 | orchestrator | Sunday 01 June 2025 23:41:42 +0000 (0:00:00.187) 0:00:03.035 *********** 2025-06-01 23:41:42.832444 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9eb8197d-3dc8-4459-9c52-34779715aaef) 2025-06-01 23:41:42.833754 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9eb8197d-3dc8-4459-9c52-34779715aaef) 2025-06-01 23:41:42.835025 | orchestrator | 2025-06-01 23:41:42.836302 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 23:41:42.837467 | orchestrator | Sunday 01 June 2025 23:41:42 
+0000 (0:00:00.405) 0:00:03.441 *********** 2025-06-01 23:41:43.280749 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e23ad96a-b832-416d-911f-1711f12500c4) 2025-06-01 23:41:43.283726 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e23ad96a-b832-416d-911f-1711f12500c4) 2025-06-01 23:41:43.284741 | orchestrator | 2025-06-01 23:41:43.286084 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 23:41:43.286690 | orchestrator | Sunday 01 June 2025 23:41:43 +0000 (0:00:00.449) 0:00:03.891 *********** 2025-06-01 23:41:43.947347 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_768ce349-132d-4c04-96b3-035bfe10ebf6) 2025-06-01 23:41:43.947446 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_768ce349-132d-4c04-96b3-035bfe10ebf6) 2025-06-01 23:41:43.947943 | orchestrator | 2025-06-01 23:41:43.948766 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 23:41:43.950344 | orchestrator | Sunday 01 June 2025 23:41:43 +0000 (0:00:00.666) 0:00:04.557 *********** 2025-06-01 23:41:44.624719 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f2cefa5c-3d1d-4277-b121-6d9adea683a7) 2025-06-01 23:41:44.625143 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f2cefa5c-3d1d-4277-b121-6d9adea683a7) 2025-06-01 23:41:44.626515 | orchestrator | 2025-06-01 23:41:44.627336 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 23:41:44.628601 | orchestrator | Sunday 01 June 2025 23:41:44 +0000 (0:00:00.678) 0:00:05.236 *********** 2025-06-01 23:41:45.379026 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-01 23:41:45.380320 | orchestrator | 2025-06-01 23:41:45.382564 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:41:45.382593 | orchestrator | Sunday 01 June 2025 23:41:45 +0000 (0:00:00.752) 0:00:05.989 *********** 2025-06-01 23:41:45.822267 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-06-01 23:41:45.823473 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-06-01 23:41:45.826080 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-06-01 23:41:45.827030 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-06-01 23:41:45.828977 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-06-01 23:41:45.830277 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-06-01 23:41:45.830992 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-06-01 23:41:45.831663 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-06-01 23:41:45.832678 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-06-01 23:41:45.833726 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-06-01 23:41:45.834383 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 
2025-06-01 23:41:45.835286 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-06-01 23:41:45.835709 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-06-01 23:41:45.836811 | orchestrator | 2025-06-01 23:41:45.837611 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:41:45.838424 | orchestrator | Sunday 01 June 2025 23:41:45 +0000 (0:00:00.441) 0:00:06.431 *********** 2025-06-01 23:41:46.057785 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:46.058242 | orchestrator | 2025-06-01 23:41:46.060187 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:41:46.060833 | orchestrator | Sunday 01 June 2025 23:41:46 +0000 (0:00:00.236) 0:00:06.667 *********** 2025-06-01 23:41:46.284550 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:46.284684 | orchestrator | 2025-06-01 23:41:46.284728 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:41:46.285385 | orchestrator | Sunday 01 June 2025 23:41:46 +0000 (0:00:00.227) 0:00:06.895 *********** 2025-06-01 23:41:46.493923 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:46.494097 | orchestrator | 2025-06-01 23:41:46.494656 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:41:46.495341 | orchestrator | Sunday 01 June 2025 23:41:46 +0000 (0:00:00.204) 0:00:07.100 *********** 2025-06-01 23:41:46.685799 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:46.686689 | orchestrator | 2025-06-01 23:41:46.687772 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:41:46.688591 | orchestrator | Sunday 01 June 2025 23:41:46 +0000 (0:00:00.195) 0:00:07.295 *********** 2025-06-01 23:41:46.896532 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:46.898164 | orchestrator | 2025-06-01 23:41:46.899428 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:41:46.900676 | orchestrator | Sunday 01 June 2025 23:41:46 +0000 (0:00:00.210) 0:00:07.505 *********** 2025-06-01 23:41:47.199565 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:47.200222 | orchestrator | 2025-06-01 23:41:47.201513 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:41:47.202515 | orchestrator | Sunday 01 June 2025 23:41:47 +0000 (0:00:00.304) 0:00:07.810 *********** 2025-06-01 23:41:47.410056 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:47.410704 | orchestrator | 2025-06-01 23:41:47.411396 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:41:47.415176 | orchestrator | Sunday 01 June 2025 23:41:47 +0000 (0:00:00.210) 0:00:08.021 *********** 2025-06-01 23:41:47.644716 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:47.645642 | orchestrator | 2025-06-01 23:41:47.647113 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:41:47.653136 | orchestrator | Sunday 01 June 2025 23:41:47 +0000 (0:00:00.233) 0:00:08.255 *********** 2025-06-01 23:41:48.730239 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-06-01 23:41:48.730402 | orchestrator | ok: [testbed-node-3] => 
(item=sda14) 2025-06-01 23:41:48.730678 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-06-01 23:41:48.731021 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-06-01 23:41:48.731687 | orchestrator | 2025-06-01 23:41:48.733425 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:41:48.733521 | orchestrator | Sunday 01 June 2025 23:41:48 +0000 (0:00:01.085) 0:00:09.340 *********** 2025-06-01 23:41:48.930212 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:48.931144 | orchestrator | 2025-06-01 23:41:48.931763 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:41:48.932164 | orchestrator | Sunday 01 June 2025 23:41:48 +0000 (0:00:00.200) 0:00:09.540 *********** 2025-06-01 23:41:49.124234 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:49.126168 | orchestrator | 2025-06-01 23:41:49.127413 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:41:49.128353 | orchestrator | Sunday 01 June 2025 23:41:49 +0000 (0:00:00.193) 0:00:09.734 *********** 2025-06-01 23:41:49.318775 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:49.319030 | orchestrator | 2025-06-01 23:41:49.320402 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:41:49.320446 | orchestrator | Sunday 01 June 2025 23:41:49 +0000 (0:00:00.195) 0:00:09.929 *********** 2025-06-01 23:41:49.522783 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:49.523396 | orchestrator | 2025-06-01 23:41:49.524361 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-01 23:41:49.525631 | orchestrator | Sunday 01 June 2025 23:41:49 +0000 (0:00:00.204) 0:00:10.133 *********** 2025-06-01 23:41:49.662011 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:49.662164 | orchestrator | 2025-06-01 23:41:49.662259 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-06-01 23:41:49.663118 | orchestrator | Sunday 01 June 2025 23:41:49 +0000 (0:00:00.139) 0:00:10.273 *********** 2025-06-01 23:41:49.853615 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '008ba5ef-cc9a-56f9-b375-6638a5870e2c'}}) 2025-06-01 23:41:49.854151 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '21b07b94-4d11-536c-9a45-349f1f6df87d'}}) 2025-06-01 23:41:49.855234 | orchestrator | 2025-06-01 23:41:49.856480 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-01 23:41:49.856707 | orchestrator | Sunday 01 June 2025 23:41:49 +0000 (0:00:00.190) 0:00:10.463 *********** 2025-06-01 23:41:52.118735 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-008ba5ef-cc9a-56f9-b375-6638a5870e2c', 'data_vg': 'ceph-008ba5ef-cc9a-56f9-b375-6638a5870e2c'}) 2025-06-01 23:41:52.119066 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-21b07b94-4d11-536c-9a45-349f1f6df87d', 'data_vg': 'ceph-21b07b94-4d11-536c-9a45-349f1f6df87d'}) 2025-06-01 23:41:52.120323 | orchestrator | 2025-06-01 23:41:52.121446 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-01 23:41:52.122303 | orchestrator | Sunday 01 June 2025 23:41:52 +0000 (0:00:02.264) 0:00:12.728 *********** 2025-06-01 23:41:52.267950 | 
orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-008ba5ef-cc9a-56f9-b375-6638a5870e2c', 'data_vg': 'ceph-008ba5ef-cc9a-56f9-b375-6638a5870e2c'})  2025-06-01 23:41:52.268034 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-21b07b94-4d11-536c-9a45-349f1f6df87d', 'data_vg': 'ceph-21b07b94-4d11-536c-9a45-349f1f6df87d'})  2025-06-01 23:41:52.268416 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:52.269577 | orchestrator | 2025-06-01 23:41:52.270424 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-06-01 23:41:52.271364 | orchestrator | Sunday 01 June 2025 23:41:52 +0000 (0:00:00.149) 0:00:12.877 *********** 2025-06-01 23:41:53.776611 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-008ba5ef-cc9a-56f9-b375-6638a5870e2c', 'data_vg': 'ceph-008ba5ef-cc9a-56f9-b375-6638a5870e2c'}) 2025-06-01 23:41:53.776819 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-21b07b94-4d11-536c-9a45-349f1f6df87d', 'data_vg': 'ceph-21b07b94-4d11-536c-9a45-349f1f6df87d'}) 2025-06-01 23:41:53.777776 | orchestrator | 2025-06-01 23:41:53.779619 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-01 23:41:53.780883 | orchestrator | Sunday 01 June 2025 23:41:53 +0000 (0:00:01.506) 0:00:14.384 *********** 2025-06-01 23:41:53.928799 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-008ba5ef-cc9a-56f9-b375-6638a5870e2c', 'data_vg': 'ceph-008ba5ef-cc9a-56f9-b375-6638a5870e2c'})  2025-06-01 23:41:53.928944 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-21b07b94-4d11-536c-9a45-349f1f6df87d', 'data_vg': 'ceph-21b07b94-4d11-536c-9a45-349f1f6df87d'})  2025-06-01 23:41:53.930260 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:53.931210 | orchestrator | 2025-06-01 23:41:53.931625 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-01 23:41:53.933550 | orchestrator | Sunday 01 June 2025 23:41:53 +0000 (0:00:00.150) 0:00:14.534 *********** 2025-06-01 23:41:54.068567 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:54.069242 | orchestrator | 2025-06-01 23:41:54.071410 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-01 23:41:54.072510 | orchestrator | Sunday 01 June 2025 23:41:54 +0000 (0:00:00.143) 0:00:14.678 *********** 2025-06-01 23:41:54.434921 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-008ba5ef-cc9a-56f9-b375-6638a5870e2c', 'data_vg': 'ceph-008ba5ef-cc9a-56f9-b375-6638a5870e2c'})  2025-06-01 23:41:54.437410 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-21b07b94-4d11-536c-9a45-349f1f6df87d', 'data_vg': 'ceph-21b07b94-4d11-536c-9a45-349f1f6df87d'})  2025-06-01 23:41:54.437457 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:54.438253 | orchestrator | 2025-06-01 23:41:54.438719 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-01 23:41:54.439599 | orchestrator | Sunday 01 June 2025 23:41:54 +0000 (0:00:00.365) 0:00:15.043 *********** 2025-06-01 23:41:54.580162 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:54.580894 | orchestrator | 2025-06-01 23:41:54.582277 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-01 23:41:54.583660 | orchestrator | Sunday 01 
June 2025 23:41:54 +0000 (0:00:00.146) 0:00:15.190 *********** 2025-06-01 23:41:54.734259 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-008ba5ef-cc9a-56f9-b375-6638a5870e2c', 'data_vg': 'ceph-008ba5ef-cc9a-56f9-b375-6638a5870e2c'})  2025-06-01 23:41:54.734922 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-21b07b94-4d11-536c-9a45-349f1f6df87d', 'data_vg': 'ceph-21b07b94-4d11-536c-9a45-349f1f6df87d'})  2025-06-01 23:41:54.736329 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:54.738465 | orchestrator | 2025-06-01 23:41:54.738493 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-01 23:41:54.738810 | orchestrator | Sunday 01 June 2025 23:41:54 +0000 (0:00:00.154) 0:00:15.344 *********** 2025-06-01 23:41:54.867423 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:54.868153 | orchestrator | 2025-06-01 23:41:54.868921 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-06-01 23:41:54.869768 | orchestrator | Sunday 01 June 2025 23:41:54 +0000 (0:00:00.133) 0:00:15.478 *********** 2025-06-01 23:41:55.030802 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-008ba5ef-cc9a-56f9-b375-6638a5870e2c', 'data_vg': 'ceph-008ba5ef-cc9a-56f9-b375-6638a5870e2c'})  2025-06-01 23:41:55.031343 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-21b07b94-4d11-536c-9a45-349f1f6df87d', 'data_vg': 'ceph-21b07b94-4d11-536c-9a45-349f1f6df87d'})  2025-06-01 23:41:55.032071 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:55.032943 | orchestrator | 2025-06-01 23:41:55.035010 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-01 23:41:55.035040 | orchestrator | Sunday 01 June 2025 23:41:55 +0000 (0:00:00.162) 0:00:15.641 *********** 2025-06-01 23:41:55.170789 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:41:55.171776 | orchestrator | 2025-06-01 23:41:55.173093 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-01 23:41:55.173617 | orchestrator | Sunday 01 June 2025 23:41:55 +0000 (0:00:00.140) 0:00:15.781 *********** 2025-06-01 23:41:55.334378 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-008ba5ef-cc9a-56f9-b375-6638a5870e2c', 'data_vg': 'ceph-008ba5ef-cc9a-56f9-b375-6638a5870e2c'})  2025-06-01 23:41:55.336096 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-21b07b94-4d11-536c-9a45-349f1f6df87d', 'data_vg': 'ceph-21b07b94-4d11-536c-9a45-349f1f6df87d'})  2025-06-01 23:41:55.336885 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:55.337707 | orchestrator | 2025-06-01 23:41:55.338117 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-01 23:41:55.338730 | orchestrator | Sunday 01 June 2025 23:41:55 +0000 (0:00:00.161) 0:00:15.943 *********** 2025-06-01 23:41:55.534961 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-008ba5ef-cc9a-56f9-b375-6638a5870e2c', 'data_vg': 'ceph-008ba5ef-cc9a-56f9-b375-6638a5870e2c'})  2025-06-01 23:41:55.535293 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-21b07b94-4d11-536c-9a45-349f1f6df87d', 'data_vg': 'ceph-21b07b94-4d11-536c-9a45-349f1f6df87d'})  2025-06-01 23:41:55.536905 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:55.537796 | orchestrator | 2025-06-01 
23:41:55.538696 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-01 23:41:55.539654 | orchestrator | Sunday 01 June 2025 23:41:55 +0000 (0:00:00.202) 0:00:16.145 *********** 2025-06-01 23:41:55.688647 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-008ba5ef-cc9a-56f9-b375-6638a5870e2c', 'data_vg': 'ceph-008ba5ef-cc9a-56f9-b375-6638a5870e2c'})  2025-06-01 23:41:55.689427 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-21b07b94-4d11-536c-9a45-349f1f6df87d', 'data_vg': 'ceph-21b07b94-4d11-536c-9a45-349f1f6df87d'})  2025-06-01 23:41:55.690631 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:55.691053 | orchestrator | 2025-06-01 23:41:55.691605 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-01 23:41:55.692323 | orchestrator | Sunday 01 June 2025 23:41:55 +0000 (0:00:00.153) 0:00:16.299 *********** 2025-06-01 23:41:55.830723 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:55.832609 | orchestrator | 2025-06-01 23:41:55.832640 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-01 23:41:55.832653 | orchestrator | Sunday 01 June 2025 23:41:55 +0000 (0:00:00.141) 0:00:16.440 *********** 2025-06-01 23:41:55.987180 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:55.988212 | orchestrator | 2025-06-01 23:41:55.988468 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-01 23:41:55.988559 | orchestrator | Sunday 01 June 2025 23:41:55 +0000 (0:00:00.157) 0:00:16.598 *********** 2025-06-01 23:41:56.127690 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:56.127782 | orchestrator | 2025-06-01 23:41:56.128989 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-01 23:41:56.131240 | orchestrator | Sunday 01 June 2025 23:41:56 +0000 (0:00:00.139) 0:00:16.737 *********** 2025-06-01 23:41:56.480507 | orchestrator | ok: [testbed-node-3] => { 2025-06-01 23:41:56.481706 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-01 23:41:56.483213 | orchestrator | } 2025-06-01 23:41:56.483239 | orchestrator | 2025-06-01 23:41:56.484013 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-01 23:41:56.484778 | orchestrator | Sunday 01 June 2025 23:41:56 +0000 (0:00:00.352) 0:00:17.090 *********** 2025-06-01 23:41:56.623666 | orchestrator | ok: [testbed-node-3] => { 2025-06-01 23:41:56.624437 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-01 23:41:56.624955 | orchestrator | } 2025-06-01 23:41:56.626149 | orchestrator | 2025-06-01 23:41:56.626486 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-01 23:41:56.627316 | orchestrator | Sunday 01 June 2025 23:41:56 +0000 (0:00:00.143) 0:00:17.234 *********** 2025-06-01 23:41:56.774978 | orchestrator | ok: [testbed-node-3] => { 2025-06-01 23:41:56.775577 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-01 23:41:56.777028 | orchestrator | } 2025-06-01 23:41:56.777717 | orchestrator | 2025-06-01 23:41:56.778720 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-01 23:41:56.780540 | orchestrator | Sunday 01 June 2025 23:41:56 +0000 (0:00:00.151) 0:00:17.385 *********** 2025-06-01 23:41:57.459344 | orchestrator | ok: 
[testbed-node-3] 2025-06-01 23:41:57.459590 | orchestrator | 2025-06-01 23:41:57.460560 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-06-01 23:41:57.461584 | orchestrator | Sunday 01 June 2025 23:41:57 +0000 (0:00:00.683) 0:00:18.068 *********** 2025-06-01 23:41:57.977839 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:41:57.977992 | orchestrator | 2025-06-01 23:41:57.978148 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-01 23:41:57.980050 | orchestrator | Sunday 01 June 2025 23:41:57 +0000 (0:00:00.520) 0:00:18.589 *********** 2025-06-01 23:41:58.493445 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:41:58.494000 | orchestrator | 2025-06-01 23:41:58.495088 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-01 23:41:58.495587 | orchestrator | Sunday 01 June 2025 23:41:58 +0000 (0:00:00.512) 0:00:19.101 *********** 2025-06-01 23:41:58.636097 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:41:58.637448 | orchestrator | 2025-06-01 23:41:58.637823 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-01 23:41:58.639086 | orchestrator | Sunday 01 June 2025 23:41:58 +0000 (0:00:00.145) 0:00:19.247 *********** 2025-06-01 23:41:58.754308 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:58.755131 | orchestrator | 2025-06-01 23:41:58.755578 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-01 23:41:58.756841 | orchestrator | Sunday 01 June 2025 23:41:58 +0000 (0:00:00.117) 0:00:19.365 *********** 2025-06-01 23:41:58.866141 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:58.866282 | orchestrator | 2025-06-01 23:41:58.866485 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-01 23:41:58.867542 | orchestrator | Sunday 01 June 2025 23:41:58 +0000 (0:00:00.112) 0:00:19.477 *********** 2025-06-01 23:41:59.015221 | orchestrator | ok: [testbed-node-3] => { 2025-06-01 23:41:59.015528 | orchestrator |  "vgs_report": { 2025-06-01 23:41:59.016628 | orchestrator |  "vg": [] 2025-06-01 23:41:59.017923 | orchestrator |  } 2025-06-01 23:41:59.019037 | orchestrator | } 2025-06-01 23:41:59.019063 | orchestrator | 2025-06-01 23:41:59.019924 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-01 23:41:59.020940 | orchestrator | Sunday 01 June 2025 23:41:59 +0000 (0:00:00.148) 0:00:19.626 *********** 2025-06-01 23:41:59.166667 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:59.167584 | orchestrator | 2025-06-01 23:41:59.168640 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-01 23:41:59.169669 | orchestrator | Sunday 01 June 2025 23:41:59 +0000 (0:00:00.151) 0:00:19.777 *********** 2025-06-01 23:41:59.302732 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:59.303301 | orchestrator | 2025-06-01 23:41:59.304410 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-01 23:41:59.304734 | orchestrator | Sunday 01 June 2025 23:41:59 +0000 (0:00:00.135) 0:00:19.913 *********** 2025-06-01 23:41:59.658484 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:59.659952 | orchestrator | 2025-06-01 23:41:59.660938 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices 
> available] ******************* 2025-06-01 23:41:59.661720 | orchestrator | Sunday 01 June 2025 23:41:59 +0000 (0:00:00.356) 0:00:20.269 *********** 2025-06-01 23:41:59.801097 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:59.802743 | orchestrator | 2025-06-01 23:41:59.802775 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-01 23:41:59.804338 | orchestrator | Sunday 01 June 2025 23:41:59 +0000 (0:00:00.140) 0:00:20.410 *********** 2025-06-01 23:41:59.946649 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:41:59.947132 | orchestrator | 2025-06-01 23:41:59.948307 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-01 23:41:59.949596 | orchestrator | Sunday 01 June 2025 23:41:59 +0000 (0:00:00.147) 0:00:20.558 *********** 2025-06-01 23:42:00.093315 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:42:00.093453 | orchestrator | 2025-06-01 23:42:00.093468 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-01 23:42:00.093482 | orchestrator | Sunday 01 June 2025 23:42:00 +0000 (0:00:00.141) 0:00:20.699 *********** 2025-06-01 23:42:00.229616 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:42:00.229736 | orchestrator | 2025-06-01 23:42:00.230832 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-06-01 23:42:00.231747 | orchestrator | Sunday 01 June 2025 23:42:00 +0000 (0:00:00.138) 0:00:20.838 *********** 2025-06-01 23:42:00.373778 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:42:00.375125 | orchestrator | 2025-06-01 23:42:00.376586 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-01 23:42:00.377830 | orchestrator | Sunday 01 June 2025 23:42:00 +0000 (0:00:00.144) 0:00:20.982 *********** 2025-06-01 23:42:00.516996 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:42:00.517251 | orchestrator | 2025-06-01 23:42:00.518525 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-01 23:42:00.519353 | orchestrator | Sunday 01 June 2025 23:42:00 +0000 (0:00:00.141) 0:00:21.124 *********** 2025-06-01 23:42:00.689653 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:42:00.690807 | orchestrator | 2025-06-01 23:42:00.691827 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-01 23:42:00.692683 | orchestrator | Sunday 01 June 2025 23:42:00 +0000 (0:00:00.176) 0:00:21.300 *********** 2025-06-01 23:42:00.822617 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:42:00.823346 | orchestrator | 2025-06-01 23:42:00.824749 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-01 23:42:00.825364 | orchestrator | Sunday 01 June 2025 23:42:00 +0000 (0:00:00.131) 0:00:21.431 *********** 2025-06-01 23:42:00.964626 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:42:00.966286 | orchestrator | 2025-06-01 23:42:00.967037 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-01 23:42:00.967997 | orchestrator | Sunday 01 June 2025 23:42:00 +0000 (0:00:00.143) 0:00:21.575 *********** 2025-06-01 23:42:01.115255 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:42:01.116226 | orchestrator | 2025-06-01 23:42:01.117274 | orchestrator | TASK [Fail if DB LV size < 30 
GiB for ceph_db_wal_devices] ********************* 2025-06-01 23:42:01.118917 | orchestrator | Sunday 01 June 2025 23:42:01 +0000 (0:00:00.150) 0:00:21.725 *********** 2025-06-01 23:42:01.252210 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:42:01.253400 | orchestrator | 2025-06-01 23:42:01.254307 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-06-01 23:42:01.255695 | orchestrator | Sunday 01 June 2025 23:42:01 +0000 (0:00:00.137) 0:00:21.863 *********** 2025-06-01 23:42:01.406811 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-008ba5ef-cc9a-56f9-b375-6638a5870e2c', 'data_vg': 'ceph-008ba5ef-cc9a-56f9-b375-6638a5870e2c'})  2025-06-01 23:42:01.408672 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-21b07b94-4d11-536c-9a45-349f1f6df87d', 'data_vg': 'ceph-21b07b94-4d11-536c-9a45-349f1f6df87d'})  2025-06-01 23:42:01.409601 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:42:01.410788 | orchestrator | 2025-06-01 23:42:01.411641 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-06-01 23:42:01.412608 | orchestrator | Sunday 01 June 2025 23:42:01 +0000 (0:00:00.153) 0:00:22.016 *********** 2025-06-01 23:42:01.773475 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-008ba5ef-cc9a-56f9-b375-6638a5870e2c', 'data_vg': 'ceph-008ba5ef-cc9a-56f9-b375-6638a5870e2c'})  2025-06-01 23:42:01.773734 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-21b07b94-4d11-536c-9a45-349f1f6df87d', 'data_vg': 'ceph-21b07b94-4d11-536c-9a45-349f1f6df87d'})  2025-06-01 23:42:01.774153 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:42:01.774627 | orchestrator | 2025-06-01 23:42:01.775318 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-06-01 23:42:01.775827 | orchestrator | Sunday 01 June 2025 23:42:01 +0000 (0:00:00.367) 0:00:22.383 *********** 2025-06-01 23:42:01.946291 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-008ba5ef-cc9a-56f9-b375-6638a5870e2c', 'data_vg': 'ceph-008ba5ef-cc9a-56f9-b375-6638a5870e2c'})  2025-06-01 23:42:01.946479 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-21b07b94-4d11-536c-9a45-349f1f6df87d', 'data_vg': 'ceph-21b07b94-4d11-536c-9a45-349f1f6df87d'})  2025-06-01 23:42:01.947338 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:42:01.948043 | orchestrator | 2025-06-01 23:42:01.948511 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-06-01 23:42:01.949128 | orchestrator | Sunday 01 June 2025 23:42:01 +0000 (0:00:00.167) 0:00:22.551 *********** 2025-06-01 23:42:02.098420 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-008ba5ef-cc9a-56f9-b375-6638a5870e2c', 'data_vg': 'ceph-008ba5ef-cc9a-56f9-b375-6638a5870e2c'})  2025-06-01 23:42:02.098531 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-21b07b94-4d11-536c-9a45-349f1f6df87d', 'data_vg': 'ceph-21b07b94-4d11-536c-9a45-349f1f6df87d'})  2025-06-01 23:42:02.098544 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:42:02.098994 | orchestrator | 2025-06-01 23:42:02.099595 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-06-01 23:42:02.100083 | orchestrator | Sunday 01 June 2025 23:42:02 +0000 (0:00:00.155) 0:00:22.706 *********** 2025-06-01 
23:42:02.243734 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-008ba5ef-cc9a-56f9-b375-6638a5870e2c', 'data_vg': 'ceph-008ba5ef-cc9a-56f9-b375-6638a5870e2c'})  2025-06-01 23:42:02.244774 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-21b07b94-4d11-536c-9a45-349f1f6df87d', 'data_vg': 'ceph-21b07b94-4d11-536c-9a45-349f1f6df87d'})  2025-06-01 23:42:02.245501 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:42:02.247953 | orchestrator | 2025-06-01 23:42:02.248844 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-06-01 23:42:02.249472 | orchestrator | Sunday 01 June 2025 23:42:02 +0000 (0:00:00.148) 0:00:22.854 *********** 2025-06-01 23:42:02.402646 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-008ba5ef-cc9a-56f9-b375-6638a5870e2c', 'data_vg': 'ceph-008ba5ef-cc9a-56f9-b375-6638a5870e2c'})  2025-06-01 23:42:02.402747 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-21b07b94-4d11-536c-9a45-349f1f6df87d', 'data_vg': 'ceph-21b07b94-4d11-536c-9a45-349f1f6df87d'})  2025-06-01 23:42:02.402761 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:42:02.403445 | orchestrator | 2025-06-01 23:42:02.403471 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-06-01 23:42:02.405141 | orchestrator | Sunday 01 June 2025 23:42:02 +0000 (0:00:00.158) 0:00:23.012 *********** 2025-06-01 23:42:02.564141 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-008ba5ef-cc9a-56f9-b375-6638a5870e2c', 'data_vg': 'ceph-008ba5ef-cc9a-56f9-b375-6638a5870e2c'})  2025-06-01 23:42:02.564825 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-21b07b94-4d11-536c-9a45-349f1f6df87d', 'data_vg': 'ceph-21b07b94-4d11-536c-9a45-349f1f6df87d'})  2025-06-01 23:42:02.565171 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:42:02.566155 | orchestrator | 2025-06-01 23:42:02.566474 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-06-01 23:42:02.567201 | orchestrator | Sunday 01 June 2025 23:42:02 +0000 (0:00:00.162) 0:00:23.175 *********** 2025-06-01 23:42:02.712917 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-008ba5ef-cc9a-56f9-b375-6638a5870e2c', 'data_vg': 'ceph-008ba5ef-cc9a-56f9-b375-6638a5870e2c'})  2025-06-01 23:42:02.713217 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-21b07b94-4d11-536c-9a45-349f1f6df87d', 'data_vg': 'ceph-21b07b94-4d11-536c-9a45-349f1f6df87d'})  2025-06-01 23:42:02.714282 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:42:02.714815 | orchestrator | 2025-06-01 23:42:02.716605 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-06-01 23:42:02.716643 | orchestrator | Sunday 01 June 2025 23:42:02 +0000 (0:00:00.148) 0:00:23.324 *********** 2025-06-01 23:42:03.228263 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:42:03.228700 | orchestrator | 2025-06-01 23:42:03.229406 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-06-01 23:42:03.230469 | orchestrator | Sunday 01 June 2025 23:42:03 +0000 (0:00:00.514) 0:00:23.838 *********** 2025-06-01 23:42:03.737989 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:42:03.739488 | orchestrator | 2025-06-01 23:42:03.740825 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] 
***********************
2025-06-01 23:42:03.742092 | orchestrator | Sunday 01 June 2025 23:42:03 +0000 (0:00:00.509) 0:00:24.347 ***********
2025-06-01 23:42:03.875358 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:42:03.876218 | orchestrator |
2025-06-01 23:42:03.877351 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-06-01 23:42:03.878744 | orchestrator | Sunday 01 June 2025 23:42:03 +0000 (0:00:00.139) 0:00:24.486 ***********
2025-06-01 23:42:04.062013 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-008ba5ef-cc9a-56f9-b375-6638a5870e2c', 'vg_name': 'ceph-008ba5ef-cc9a-56f9-b375-6638a5870e2c'})
2025-06-01 23:42:04.062255 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-21b07b94-4d11-536c-9a45-349f1f6df87d', 'vg_name': 'ceph-21b07b94-4d11-536c-9a45-349f1f6df87d'})
2025-06-01 23:42:04.063151 | orchestrator |
2025-06-01 23:42:04.064047 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-06-01 23:42:04.065615 | orchestrator | Sunday 01 June 2025 23:42:04 +0000 (0:00:00.185) 0:00:24.672 ***********
2025-06-01 23:42:04.208687 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-008ba5ef-cc9a-56f9-b375-6638a5870e2c', 'data_vg': 'ceph-008ba5ef-cc9a-56f9-b375-6638a5870e2c'})
2025-06-01 23:42:04.209412 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-21b07b94-4d11-536c-9a45-349f1f6df87d', 'data_vg': 'ceph-21b07b94-4d11-536c-9a45-349f1f6df87d'})
2025-06-01 23:42:04.209481 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:42:04.209552 | orchestrator |
2025-06-01 23:42:04.209860 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-06-01 23:42:04.210090 | orchestrator | Sunday 01 June 2025 23:42:04 +0000 (0:00:00.147) 0:00:24.820 ***********
2025-06-01 23:42:04.568291 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-008ba5ef-cc9a-56f9-b375-6638a5870e2c', 'data_vg': 'ceph-008ba5ef-cc9a-56f9-b375-6638a5870e2c'})
2025-06-01 23:42:04.569287 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-21b07b94-4d11-536c-9a45-349f1f6df87d', 'data_vg': 'ceph-21b07b94-4d11-536c-9a45-349f1f6df87d'})
2025-06-01 23:42:04.571133 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:42:04.571162 | orchestrator |
2025-06-01 23:42:04.571813 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-06-01 23:42:04.572429 | orchestrator | Sunday 01 June 2025 23:42:04 +0000 (0:00:00.357) 0:00:25.177 ***********
2025-06-01 23:42:04.730263 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-008ba5ef-cc9a-56f9-b375-6638a5870e2c', 'data_vg': 'ceph-008ba5ef-cc9a-56f9-b375-6638a5870e2c'})
2025-06-01 23:42:04.730367 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-21b07b94-4d11-536c-9a45-349f1f6df87d', 'data_vg': 'ceph-21b07b94-4d11-536c-9a45-349f1f6df87d'})
2025-06-01 23:42:04.730721 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:42:04.731541 | orchestrator |
2025-06-01 23:42:04.732243 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-06-01 23:42:04.732628 | orchestrator | Sunday 01 June 2025 23:42:04 +0000 (0:00:00.162) 0:00:25.340 ***********
2025-06-01 23:42:05.020024 | orchestrator | ok: [testbed-node-3] => {
2025-06-01 23:42:05.020956 | orchestrator |  "lvm_report": {
2025-06-01 23:42:05.022694 | orchestrator |  "lv": [
2025-06-01 23:42:05.022919 | orchestrator |  {
2025-06-01 23:42:05.024843 | orchestrator |  "lv_name": "osd-block-008ba5ef-cc9a-56f9-b375-6638a5870e2c",
2025-06-01 23:42:05.024932 | orchestrator |  "vg_name": "ceph-008ba5ef-cc9a-56f9-b375-6638a5870e2c"
2025-06-01 23:42:05.025604 | orchestrator |  },
2025-06-01 23:42:05.026343 | orchestrator |  {
2025-06-01 23:42:05.027069 | orchestrator |  "lv_name": "osd-block-21b07b94-4d11-536c-9a45-349f1f6df87d",
2025-06-01 23:42:05.027703 | orchestrator |  "vg_name": "ceph-21b07b94-4d11-536c-9a45-349f1f6df87d"
2025-06-01 23:42:05.028267 | orchestrator |  }
2025-06-01 23:42:05.028672 | orchestrator |  ],
2025-06-01 23:42:05.029469 | orchestrator |  "pv": [
2025-06-01 23:42:05.030091 | orchestrator |  {
2025-06-01 23:42:05.030422 | orchestrator |  "pv_name": "/dev/sdb",
2025-06-01 23:42:05.031108 | orchestrator |  "vg_name": "ceph-008ba5ef-cc9a-56f9-b375-6638a5870e2c"
2025-06-01 23:42:05.031305 | orchestrator |  },
2025-06-01 23:42:05.031655 | orchestrator |  {
2025-06-01 23:42:05.032056 | orchestrator |  "pv_name": "/dev/sdc",
2025-06-01 23:42:05.032509 | orchestrator |  "vg_name": "ceph-21b07b94-4d11-536c-9a45-349f1f6df87d"
2025-06-01 23:42:05.032892 | orchestrator |  }
2025-06-01 23:42:05.033269 | orchestrator |  ]
2025-06-01 23:42:05.033746 | orchestrator |  }
2025-06-01 23:42:05.034194 | orchestrator | }
2025-06-01 23:42:05.034544 | orchestrator |
2025-06-01 23:42:05.034982 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-06-01 23:42:05.035222 | orchestrator |
2025-06-01 23:42:05.035448 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-01 23:42:05.035779 | orchestrator | Sunday 01 June 2025 23:42:05 +0000 (0:00:00.290) 0:00:25.631 ***********
2025-06-01 23:42:05.283264 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-06-01 23:42:05.283624 | orchestrator |
2025-06-01 23:42:05.284591 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-01 23:42:05.285399 | orchestrator | Sunday 01 June 2025 23:42:05 +0000 (0:00:00.261) 0:00:25.893 ***********
2025-06-01 23:42:05.526862 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:42:05.527283 | orchestrator |
2025-06-01 23:42:05.528832 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-01 23:42:05.529623 | orchestrator | Sunday 01 June 2025 23:42:05 +0000 (0:00:00.242) 0:00:26.136 ***********
2025-06-01 23:42:05.947691 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-06-01 23:42:05.948411 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-06-01 23:42:05.949838 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-06-01 23:42:05.951384 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-06-01 23:42:05.953076 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-06-01 23:42:05.953810 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-06-01 23:42:05.954152 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-06-01 23:42:05.955177 | orchestrator | included:
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-06-01 23:42:05.955621 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-06-01 23:42:05.956154 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-06-01 23:42:05.956760 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-06-01 23:42:05.957076 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-06-01 23:42:05.957528 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-06-01 23:42:05.958408 | orchestrator | 2025-06-01 23:42:05.958593 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 23:42:05.959049 | orchestrator | Sunday 01 June 2025 23:42:05 +0000 (0:00:00.417) 0:00:26.553 *********** 2025-06-01 23:42:06.143001 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:06.144765 | orchestrator | 2025-06-01 23:42:06.146813 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 23:42:06.147614 | orchestrator | Sunday 01 June 2025 23:42:06 +0000 (0:00:00.199) 0:00:26.753 *********** 2025-06-01 23:42:06.353592 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:06.355043 | orchestrator | 2025-06-01 23:42:06.356111 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 23:42:06.357990 | orchestrator | Sunday 01 June 2025 23:42:06 +0000 (0:00:00.210) 0:00:26.964 *********** 2025-06-01 23:42:06.554858 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:06.555711 | orchestrator | 2025-06-01 23:42:06.557378 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 23:42:06.558777 | orchestrator | Sunday 01 June 2025 23:42:06 +0000 (0:00:00.200) 0:00:27.164 *********** 2025-06-01 23:42:07.179137 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:07.179264 | orchestrator | 2025-06-01 23:42:07.180758 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 23:42:07.182587 | orchestrator | Sunday 01 June 2025 23:42:07 +0000 (0:00:00.625) 0:00:27.789 *********** 2025-06-01 23:42:07.383196 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:07.383400 | orchestrator | 2025-06-01 23:42:07.384558 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 23:42:07.385521 | orchestrator | Sunday 01 June 2025 23:42:07 +0000 (0:00:00.204) 0:00:27.994 *********** 2025-06-01 23:42:07.594264 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:07.594854 | orchestrator | 2025-06-01 23:42:07.595939 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 23:42:07.597418 | orchestrator | Sunday 01 June 2025 23:42:07 +0000 (0:00:00.209) 0:00:28.203 *********** 2025-06-01 23:42:07.817358 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:07.818735 | orchestrator | 2025-06-01 23:42:07.820376 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 23:42:07.821148 | orchestrator | Sunday 01 June 2025 23:42:07 +0000 (0:00:00.224) 0:00:28.428 *********** 2025-06-01 23:42:08.021042 | orchestrator | skipping: [testbed-node-4] 
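[Editor's note] Each round of "Add known links to the list of available block devices" includes /ansible/tasks/_add-device-links.yml once per block device and, where stable /dev/disk/by-id aliases exist, records them against the kernel name — hence the paired scsi-0QEMU_QEMU_HARDDISK_*/scsi-SQEMU_QEMU_HARDDISK_* items and the single ata-QEMU_DVD-ROM_QM00001 item logged for testbed-node-3 above. The task file itself is not part of this log, so the following is only a minimal sketch of such a by-id resolution step; the task layout and the `device` and `_available_devices` names are assumptions, not the actual OSISM implementation.

```yaml
# Hypothetical sketch of a by-id resolution step in the spirit of
# _add-device-links.yml (the real task file is not shown in this log).
# Assumed inputs: `device` (kernel name, e.g. "sdb") and the
# accumulating `_available_devices` list.
- name: Find all /dev/disk/by-id symlinks
  ansible.builtin.find:
    paths: /dev/disk/by-id
    file_type: link
  register: _by_id_links

- name: Resolve each symlink to its target device
  ansible.builtin.stat:
    path: "{{ item.path }}"
  loop: "{{ _by_id_links.files }}"
  register: _link_stats

- name: Record aliases that point at the current device
  ansible.builtin.set_fact:
    _available_devices: "{{ _available_devices | default([]) + [item.item.path | basename] }}"
  loop: "{{ _link_stats.results }}"
  when: (item.stat.lnk_source | basename) == device
```

On QEMU-backed nodes like these, each virtual disk typically carries two by-id aliases (a scsi-0QEMU_* and a scsi-SQEMU_* link), which matches the pairs of `ok` items in the log.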
2025-06-01 23:42:08.021190 | orchestrator | 2025-06-01 23:42:08.022065 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 23:42:08.023111 | orchestrator | Sunday 01 June 2025 23:42:08 +0000 (0:00:00.203) 0:00:28.631 *********** 2025-06-01 23:42:08.435132 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b16f75d9-1f35-401a-92f6-79076ad325ac) 2025-06-01 23:42:08.435432 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b16f75d9-1f35-401a-92f6-79076ad325ac) 2025-06-01 23:42:08.436488 | orchestrator | 2025-06-01 23:42:08.437503 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 23:42:08.438738 | orchestrator | Sunday 01 June 2025 23:42:08 +0000 (0:00:00.413) 0:00:29.045 *********** 2025-06-01 23:42:08.880318 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9f9b614f-8ac1-443f-a8a9-e3e743fec9fb) 2025-06-01 23:42:08.882102 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9f9b614f-8ac1-443f-a8a9-e3e743fec9fb) 2025-06-01 23:42:08.883980 | orchestrator | 2025-06-01 23:42:08.884011 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 23:42:08.884052 | orchestrator | Sunday 01 June 2025 23:42:08 +0000 (0:00:00.444) 0:00:29.490 *********** 2025-06-01 23:42:09.295332 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_39b25e00-2509-407e-b71e-c183a8ac9680) 2025-06-01 23:42:09.296108 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_39b25e00-2509-407e-b71e-c183a8ac9680) 2025-06-01 23:42:09.296279 | orchestrator | 2025-06-01 23:42:09.297391 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 23:42:09.297604 | orchestrator | Sunday 01 June 2025 23:42:09 +0000 (0:00:00.416) 0:00:29.906 *********** 2025-06-01 23:42:09.724937 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_389c9d93-9871-4a47-9a60-ac279d750f3d) 2025-06-01 23:42:09.725361 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_389c9d93-9871-4a47-9a60-ac279d750f3d) 2025-06-01 23:42:09.726159 | orchestrator | 2025-06-01 23:42:09.726768 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 23:42:09.727523 | orchestrator | Sunday 01 June 2025 23:42:09 +0000 (0:00:00.428) 0:00:30.335 *********** 2025-06-01 23:42:10.104978 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-01 23:42:10.105311 | orchestrator | 2025-06-01 23:42:10.107298 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:42:10.107332 | orchestrator | Sunday 01 June 2025 23:42:10 +0000 (0:00:00.379) 0:00:30.714 *********** 2025-06-01 23:42:10.732788 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-06-01 23:42:10.733925 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-06-01 23:42:10.735574 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-06-01 23:42:10.736320 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-06-01 23:42:10.737477 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop4) 2025-06-01 23:42:10.738407 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-06-01 23:42:10.739108 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-06-01 23:42:10.739847 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-06-01 23:42:10.740347 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-06-01 23:42:10.740914 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-06-01 23:42:10.741368 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-06-01 23:42:10.741890 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-06-01 23:42:10.742546 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-06-01 23:42:10.742869 | orchestrator | 2025-06-01 23:42:10.743734 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:42:10.744497 | orchestrator | Sunday 01 June 2025 23:42:10 +0000 (0:00:00.627) 0:00:31.342 *********** 2025-06-01 23:42:10.959134 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:10.959228 | orchestrator | 2025-06-01 23:42:10.960392 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:42:10.961828 | orchestrator | Sunday 01 June 2025 23:42:10 +0000 (0:00:00.227) 0:00:31.569 *********** 2025-06-01 23:42:11.165077 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:11.166202 | orchestrator | 2025-06-01 23:42:11.167082 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:42:11.167956 | orchestrator | Sunday 01 June 2025 23:42:11 +0000 (0:00:00.205) 0:00:31.775 *********** 2025-06-01 23:42:11.387299 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:11.387652 | orchestrator | 2025-06-01 23:42:11.388476 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:42:11.388941 | orchestrator | Sunday 01 June 2025 23:42:11 +0000 (0:00:00.223) 0:00:31.998 *********** 2025-06-01 23:42:11.588768 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:11.590224 | orchestrator | 2025-06-01 23:42:11.592038 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:42:11.592504 | orchestrator | Sunday 01 June 2025 23:42:11 +0000 (0:00:00.199) 0:00:32.198 *********** 2025-06-01 23:42:11.805914 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:11.807681 | orchestrator | 2025-06-01 23:42:11.809359 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:42:11.810788 | orchestrator | Sunday 01 June 2025 23:42:11 +0000 (0:00:00.218) 0:00:32.416 *********** 2025-06-01 23:42:12.021801 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:12.022934 | orchestrator | 2025-06-01 23:42:12.023845 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:42:12.024627 | orchestrator | Sunday 01 June 2025 23:42:12 +0000 (0:00:00.216) 0:00:32.633 *********** 2025-06-01 23:42:12.242619 | orchestrator | skipping: [testbed-node-4] 
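[Editor's note] The link and partition scans above (tasks included from _add-device-links.yml and _add-device-partitions.yml) amount to walking /dev/disk/by-id and /sys/block. The task file internals are not shown in this log, so the following Python sketch is only an approximation of that discovery, assuming a Linux host; the path and function names are illustrative:

import os

def device_links(by_id="/dev/disk/by-id"):
    # Map each kernel device name (sda, sr0, ...) to its persistent by-id
    # links, e.g. sdb -> ['scsi-0QEMU_QEMU_HARDDISK_...',
    # 'scsi-SQEMU_QEMU_HARDDISK_...'] as seen in the tasks above.
    links = {}
    for name in sorted(os.listdir(by_id)):
        target = os.path.basename(os.path.realpath(os.path.join(by_id, name)))
        links.setdefault(target, []).append(name)
    return links

def partitions(disk):
    # Partitions of a disk show up as subdirectories of /sys/block/<disk>
    # named <disk><n>, e.g. sda -> ['sda1', 'sda14', 'sda15', 'sda16'].
    sys_path = os.path.join("/sys/block", disk)
    return sorted(p for p in os.listdir(sys_path) if p.startswith(disk))

if __name__ == "__main__":
    for disk, names in device_links().items():
        print(disk, "->", names)

Loop devices and the DVD drive expose no matching by-id links or partitions, which is why most loop iterations above and below are skipped.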
2025-06-01 23:42:12.243474 | orchestrator | 2025-06-01 23:42:12.244733 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:42:12.245609 | orchestrator | Sunday 01 June 2025 23:42:12 +0000 (0:00:00.220) 0:00:32.853 *********** 2025-06-01 23:42:12.454238 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:12.454336 | orchestrator | 2025-06-01 23:42:12.454907 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:42:12.455493 | orchestrator | Sunday 01 June 2025 23:42:12 +0000 (0:00:00.211) 0:00:33.065 *********** 2025-06-01 23:42:13.344409 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-06-01 23:42:13.345203 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-06-01 23:42:13.346600 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-06-01 23:42:13.348184 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-06-01 23:42:13.349363 | orchestrator | 2025-06-01 23:42:13.350474 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:42:13.351039 | orchestrator | Sunday 01 June 2025 23:42:13 +0000 (0:00:00.888) 0:00:33.954 *********** 2025-06-01 23:42:13.542354 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:13.543389 | orchestrator | 2025-06-01 23:42:13.544715 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:42:13.545464 | orchestrator | Sunday 01 June 2025 23:42:13 +0000 (0:00:00.199) 0:00:34.153 *********** 2025-06-01 23:42:13.744166 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:13.746097 | orchestrator | 2025-06-01 23:42:13.746133 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:42:13.747394 | orchestrator | Sunday 01 June 2025 23:42:13 +0000 (0:00:00.199) 0:00:34.352 *********** 2025-06-01 23:42:14.482930 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:14.483642 | orchestrator | 2025-06-01 23:42:14.485115 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:42:14.485339 | orchestrator | Sunday 01 June 2025 23:42:14 +0000 (0:00:00.741) 0:00:35.093 *********** 2025-06-01 23:42:14.692435 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:14.692539 | orchestrator | 2025-06-01 23:42:14.693068 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-01 23:42:14.693511 | orchestrator | Sunday 01 June 2025 23:42:14 +0000 (0:00:00.207) 0:00:35.301 *********** 2025-06-01 23:42:14.841534 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:14.841632 | orchestrator | 2025-06-01 23:42:14.842295 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-06-01 23:42:14.843929 | orchestrator | Sunday 01 June 2025 23:42:14 +0000 (0:00:00.150) 0:00:35.451 *********** 2025-06-01 23:42:15.043093 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e43a5796-5555-5d7b-8188-8712d414b3d1'}}) 2025-06-01 23:42:15.043221 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3aa9cf12-e8a4-5f15-a0dc-00261f7d28af'}}) 2025-06-01 23:42:15.044830 | orchestrator | 2025-06-01 23:42:15.045745 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-01 23:42:15.046221 | 
orchestrator | Sunday 01 June 2025 23:42:15 +0000 (0:00:00.202) 0:00:35.653 *********** 2025-06-01 23:42:17.164802 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-e43a5796-5555-5d7b-8188-8712d414b3d1', 'data_vg': 'ceph-e43a5796-5555-5d7b-8188-8712d414b3d1'}) 2025-06-01 23:42:17.165070 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af', 'data_vg': 'ceph-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af'}) 2025-06-01 23:42:17.167491 | orchestrator | 2025-06-01 23:42:17.168221 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-01 23:42:17.169499 | orchestrator | Sunday 01 June 2025 23:42:17 +0000 (0:00:02.118) 0:00:37.772 *********** 2025-06-01 23:42:17.319838 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e43a5796-5555-5d7b-8188-8712d414b3d1', 'data_vg': 'ceph-e43a5796-5555-5d7b-8188-8712d414b3d1'})  2025-06-01 23:42:17.320927 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af', 'data_vg': 'ceph-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af'})  2025-06-01 23:42:17.321839 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:17.322534 | orchestrator | 2025-06-01 23:42:17.324096 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-06-01 23:42:17.324120 | orchestrator | Sunday 01 June 2025 23:42:17 +0000 (0:00:00.158) 0:00:37.931 *********** 2025-06-01 23:42:18.587644 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-e43a5796-5555-5d7b-8188-8712d414b3d1', 'data_vg': 'ceph-e43a5796-5555-5d7b-8188-8712d414b3d1'}) 2025-06-01 23:42:18.587754 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af', 'data_vg': 'ceph-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af'}) 2025-06-01 23:42:18.589373 | orchestrator | 2025-06-01 23:42:18.591100 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-01 23:42:18.592259 | orchestrator | Sunday 01 June 2025 23:42:18 +0000 (0:00:01.264) 0:00:39.195 *********** 2025-06-01 23:42:18.743378 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e43a5796-5555-5d7b-8188-8712d414b3d1', 'data_vg': 'ceph-e43a5796-5555-5d7b-8188-8712d414b3d1'})  2025-06-01 23:42:18.743751 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af', 'data_vg': 'ceph-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af'})  2025-06-01 23:42:18.744096 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:18.744845 | orchestrator | 2025-06-01 23:42:18.745586 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-01 23:42:18.746139 | orchestrator | Sunday 01 June 2025 23:42:18 +0000 (0:00:00.156) 0:00:39.352 *********** 2025-06-01 23:42:18.880320 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:18.881206 | orchestrator | 2025-06-01 23:42:18.882336 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-01 23:42:18.883838 | orchestrator | Sunday 01 June 2025 23:42:18 +0000 (0:00:00.137) 0:00:39.490 *********** 2025-06-01 23:42:19.063922 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e43a5796-5555-5d7b-8188-8712d414b3d1', 'data_vg': 'ceph-e43a5796-5555-5d7b-8188-8712d414b3d1'})  2025-06-01 23:42:19.064017 | orchestrator | skipping: 
[testbed-node-4] => (item={'data': 'osd-block-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af', 'data_vg': 'ceph-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af'})  2025-06-01 23:42:19.064031 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:19.064044 | orchestrator | 2025-06-01 23:42:19.064056 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-01 23:42:19.064069 | orchestrator | Sunday 01 June 2025 23:42:19 +0000 (0:00:00.180) 0:00:39.670 *********** 2025-06-01 23:42:19.186787 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:19.187426 | orchestrator | 2025-06-01 23:42:19.188815 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-01 23:42:19.189255 | orchestrator | Sunday 01 June 2025 23:42:19 +0000 (0:00:00.126) 0:00:39.797 *********** 2025-06-01 23:42:19.343949 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e43a5796-5555-5d7b-8188-8712d414b3d1', 'data_vg': 'ceph-e43a5796-5555-5d7b-8188-8712d414b3d1'})  2025-06-01 23:42:19.344303 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af', 'data_vg': 'ceph-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af'})  2025-06-01 23:42:19.345565 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:19.346636 | orchestrator | 2025-06-01 23:42:19.347296 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-01 23:42:19.347619 | orchestrator | Sunday 01 June 2025 23:42:19 +0000 (0:00:00.156) 0:00:39.954 *********** 2025-06-01 23:42:19.692067 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:19.693781 | orchestrator | 2025-06-01 23:42:19.694829 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-06-01 23:42:19.696337 | orchestrator | Sunday 01 June 2025 23:42:19 +0000 (0:00:00.348) 0:00:40.302 *********** 2025-06-01 23:42:19.856287 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e43a5796-5555-5d7b-8188-8712d414b3d1', 'data_vg': 'ceph-e43a5796-5555-5d7b-8188-8712d414b3d1'})  2025-06-01 23:42:19.856983 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af', 'data_vg': 'ceph-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af'})  2025-06-01 23:42:19.857570 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:19.858074 | orchestrator | 2025-06-01 23:42:19.858484 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-01 23:42:19.858837 | orchestrator | Sunday 01 June 2025 23:42:19 +0000 (0:00:00.164) 0:00:40.467 *********** 2025-06-01 23:42:20.014782 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:42:20.014871 | orchestrator | 2025-06-01 23:42:20.015074 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-01 23:42:20.015386 | orchestrator | Sunday 01 June 2025 23:42:20 +0000 (0:00:00.157) 0:00:40.625 *********** 2025-06-01 23:42:20.160826 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e43a5796-5555-5d7b-8188-8712d414b3d1', 'data_vg': 'ceph-e43a5796-5555-5d7b-8188-8712d414b3d1'})  2025-06-01 23:42:20.161040 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af', 'data_vg': 'ceph-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af'})  2025-06-01 23:42:20.162376 | orchestrator | skipping: [testbed-node-4] 2025-06-01 
23:42:20.163287 | orchestrator | 2025-06-01 23:42:20.164330 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-01 23:42:20.164994 | orchestrator | Sunday 01 June 2025 23:42:20 +0000 (0:00:00.145) 0:00:40.771 *********** 2025-06-01 23:42:20.331639 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e43a5796-5555-5d7b-8188-8712d414b3d1', 'data_vg': 'ceph-e43a5796-5555-5d7b-8188-8712d414b3d1'})  2025-06-01 23:42:20.331736 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af', 'data_vg': 'ceph-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af'})  2025-06-01 23:42:20.332502 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:20.333602 | orchestrator | 2025-06-01 23:42:20.334204 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-01 23:42:20.336051 | orchestrator | Sunday 01 June 2025 23:42:20 +0000 (0:00:00.168) 0:00:40.940 *********** 2025-06-01 23:42:20.495456 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e43a5796-5555-5d7b-8188-8712d414b3d1', 'data_vg': 'ceph-e43a5796-5555-5d7b-8188-8712d414b3d1'})  2025-06-01 23:42:20.497283 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af', 'data_vg': 'ceph-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af'})  2025-06-01 23:42:20.497797 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:20.498301 | orchestrator | 2025-06-01 23:42:20.499253 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-01 23:42:20.499273 | orchestrator | Sunday 01 June 2025 23:42:20 +0000 (0:00:00.166) 0:00:41.106 *********** 2025-06-01 23:42:20.626476 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:20.627132 | orchestrator | 2025-06-01 23:42:20.629011 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-01 23:42:20.629241 | orchestrator | Sunday 01 June 2025 23:42:20 +0000 (0:00:00.129) 0:00:41.236 *********** 2025-06-01 23:42:20.761465 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:20.761642 | orchestrator | 2025-06-01 23:42:20.762951 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-01 23:42:20.763581 | orchestrator | Sunday 01 June 2025 23:42:20 +0000 (0:00:00.134) 0:00:41.370 *********** 2025-06-01 23:42:20.893082 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:20.895829 | orchestrator | 2025-06-01 23:42:20.895865 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-01 23:42:20.897147 | orchestrator | Sunday 01 June 2025 23:42:20 +0000 (0:00:00.132) 0:00:41.503 *********** 2025-06-01 23:42:21.041284 | orchestrator | ok: [testbed-node-4] => { 2025-06-01 23:42:21.042582 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-01 23:42:21.043659 | orchestrator | } 2025-06-01 23:42:21.044685 | orchestrator | 2025-06-01 23:42:21.045603 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-01 23:42:21.046228 | orchestrator | Sunday 01 June 2025 23:42:21 +0000 (0:00:00.146) 0:00:41.650 *********** 2025-06-01 23:42:21.189277 | orchestrator | ok: [testbed-node-4] => { 2025-06-01 23:42:21.189579 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-01 23:42:21.190228 | orchestrator | } 2025-06-01 23:42:21.190390 | 
orchestrator | 2025-06-01 23:42:21.191412 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-01 23:42:21.192356 | orchestrator | Sunday 01 June 2025 23:42:21 +0000 (0:00:00.150) 0:00:41.800 *********** 2025-06-01 23:42:21.336413 | orchestrator | ok: [testbed-node-4] => { 2025-06-01 23:42:21.337690 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-01 23:42:21.337786 | orchestrator | } 2025-06-01 23:42:21.339457 | orchestrator | 2025-06-01 23:42:21.341686 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-01 23:42:21.346306 | orchestrator | Sunday 01 June 2025 23:42:21 +0000 (0:00:00.145) 0:00:41.946 *********** 2025-06-01 23:42:22.087193 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:42:22.087364 | orchestrator | 2025-06-01 23:42:22.087787 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-06-01 23:42:22.088386 | orchestrator | Sunday 01 June 2025 23:42:22 +0000 (0:00:00.751) 0:00:42.697 *********** 2025-06-01 23:42:22.627524 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:42:22.627737 | orchestrator | 2025-06-01 23:42:22.629286 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-01 23:42:22.629804 | orchestrator | Sunday 01 June 2025 23:42:22 +0000 (0:00:00.538) 0:00:43.236 *********** 2025-06-01 23:42:23.163346 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:42:23.164076 | orchestrator | 2025-06-01 23:42:23.164109 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-01 23:42:23.164660 | orchestrator | Sunday 01 June 2025 23:42:23 +0000 (0:00:00.535) 0:00:43.771 *********** 2025-06-01 23:42:23.321431 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:42:23.322182 | orchestrator | 2025-06-01 23:42:23.323029 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-01 23:42:23.324684 | orchestrator | Sunday 01 June 2025 23:42:23 +0000 (0:00:00.159) 0:00:43.931 *********** 2025-06-01 23:42:23.437755 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:23.438877 | orchestrator | 2025-06-01 23:42:23.440143 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-01 23:42:23.442357 | orchestrator | Sunday 01 June 2025 23:42:23 +0000 (0:00:00.117) 0:00:44.049 *********** 2025-06-01 23:42:23.554945 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:23.555710 | orchestrator | 2025-06-01 23:42:23.557541 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-01 23:42:23.558109 | orchestrator | Sunday 01 June 2025 23:42:23 +0000 (0:00:00.115) 0:00:44.164 *********** 2025-06-01 23:42:23.698981 | orchestrator | ok: [testbed-node-4] => { 2025-06-01 23:42:23.700006 | orchestrator |  "vgs_report": { 2025-06-01 23:42:23.701518 | orchestrator |  "vg": [] 2025-06-01 23:42:23.703290 | orchestrator |  } 2025-06-01 23:42:23.704134 | orchestrator | } 2025-06-01 23:42:23.705191 | orchestrator | 2025-06-01 23:42:23.705980 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-01 23:42:23.706853 | orchestrator | Sunday 01 June 2025 23:42:23 +0000 (0:00:00.145) 0:00:44.309 *********** 2025-06-01 23:42:23.838699 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:23.838836 | orchestrator | 2025-06-01 
23:42:23.839453 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-01 23:42:23.840495 | orchestrator | Sunday 01 June 2025 23:42:23 +0000 (0:00:00.138) 0:00:44.448 *********** 2025-06-01 23:42:23.991863 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:23.992157 | orchestrator | 2025-06-01 23:42:23.993320 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-01 23:42:23.994111 | orchestrator | Sunday 01 June 2025 23:42:23 +0000 (0:00:00.149) 0:00:44.598 *********** 2025-06-01 23:42:24.142668 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:24.142777 | orchestrator | 2025-06-01 23:42:24.145940 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-01 23:42:24.146916 | orchestrator | Sunday 01 June 2025 23:42:24 +0000 (0:00:00.154) 0:00:44.752 *********** 2025-06-01 23:42:24.283147 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:24.284702 | orchestrator | 2025-06-01 23:42:24.287577 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-01 23:42:24.288787 | orchestrator | Sunday 01 June 2025 23:42:24 +0000 (0:00:00.140) 0:00:44.893 *********** 2025-06-01 23:42:24.428458 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:24.428545 | orchestrator | 2025-06-01 23:42:24.429583 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-01 23:42:24.431284 | orchestrator | Sunday 01 June 2025 23:42:24 +0000 (0:00:00.143) 0:00:45.036 *********** 2025-06-01 23:42:24.789630 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:24.793080 | orchestrator | 2025-06-01 23:42:24.794830 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-01 23:42:24.796110 | orchestrator | Sunday 01 June 2025 23:42:24 +0000 (0:00:00.364) 0:00:45.400 *********** 2025-06-01 23:42:24.942493 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:24.944066 | orchestrator | 2025-06-01 23:42:24.945799 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-06-01 23:42:24.946941 | orchestrator | Sunday 01 June 2025 23:42:24 +0000 (0:00:00.151) 0:00:45.552 *********** 2025-06-01 23:42:25.081107 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:25.082117 | orchestrator | 2025-06-01 23:42:25.082942 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-01 23:42:25.083255 | orchestrator | Sunday 01 June 2025 23:42:25 +0000 (0:00:00.140) 0:00:45.693 *********** 2025-06-01 23:42:25.225965 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:25.226266 | orchestrator | 2025-06-01 23:42:25.227433 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-01 23:42:25.228305 | orchestrator | Sunday 01 June 2025 23:42:25 +0000 (0:00:00.143) 0:00:45.836 *********** 2025-06-01 23:42:25.378363 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:25.379048 | orchestrator | 2025-06-01 23:42:25.380779 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-01 23:42:25.381559 | orchestrator | Sunday 01 June 2025 23:42:25 +0000 (0:00:00.150) 0:00:45.986 *********** 2025-06-01 23:42:25.507137 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:25.507380 | 
orchestrator | 2025-06-01 23:42:25.508164 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-01 23:42:25.508505 | orchestrator | Sunday 01 June 2025 23:42:25 +0000 (0:00:00.130) 0:00:46.116 *********** 2025-06-01 23:42:25.644038 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:25.645704 | orchestrator | 2025-06-01 23:42:25.646583 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-01 23:42:25.648204 | orchestrator | Sunday 01 June 2025 23:42:25 +0000 (0:00:00.137) 0:00:46.254 *********** 2025-06-01 23:42:25.784180 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:25.786511 | orchestrator | 2025-06-01 23:42:25.786844 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-06-01 23:42:25.788396 | orchestrator | Sunday 01 June 2025 23:42:25 +0000 (0:00:00.140) 0:00:46.394 *********** 2025-06-01 23:42:25.929633 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:25.929721 | orchestrator | 2025-06-01 23:42:25.930573 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-06-01 23:42:25.931493 | orchestrator | Sunday 01 June 2025 23:42:25 +0000 (0:00:00.145) 0:00:46.539 *********** 2025-06-01 23:42:26.084245 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e43a5796-5555-5d7b-8188-8712d414b3d1', 'data_vg': 'ceph-e43a5796-5555-5d7b-8188-8712d414b3d1'})  2025-06-01 23:42:26.084403 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af', 'data_vg': 'ceph-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af'})  2025-06-01 23:42:26.085140 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:26.087244 | orchestrator | 2025-06-01 23:42:26.088075 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-06-01 23:42:26.089418 | orchestrator | Sunday 01 June 2025 23:42:26 +0000 (0:00:00.154) 0:00:46.694 *********** 2025-06-01 23:42:26.240490 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e43a5796-5555-5d7b-8188-8712d414b3d1', 'data_vg': 'ceph-e43a5796-5555-5d7b-8188-8712d414b3d1'})  2025-06-01 23:42:26.241280 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af', 'data_vg': 'ceph-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af'})  2025-06-01 23:42:26.243869 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:26.244089 | orchestrator | 2025-06-01 23:42:26.245380 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-06-01 23:42:26.246065 | orchestrator | Sunday 01 June 2025 23:42:26 +0000 (0:00:00.157) 0:00:46.851 *********** 2025-06-01 23:42:26.402165 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e43a5796-5555-5d7b-8188-8712d414b3d1', 'data_vg': 'ceph-e43a5796-5555-5d7b-8188-8712d414b3d1'})  2025-06-01 23:42:26.402556 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af', 'data_vg': 'ceph-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af'})  2025-06-01 23:42:26.403836 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:26.404907 | orchestrator | 2025-06-01 23:42:26.405499 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-06-01 23:42:26.406150 | orchestrator | Sunday 01 June 2025 23:42:26 +0000 
(0:00:00.159) 0:00:47.011 *********** 2025-06-01 23:42:26.794290 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e43a5796-5555-5d7b-8188-8712d414b3d1', 'data_vg': 'ceph-e43a5796-5555-5d7b-8188-8712d414b3d1'})  2025-06-01 23:42:26.794384 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af', 'data_vg': 'ceph-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af'})  2025-06-01 23:42:26.796199 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:26.796293 | orchestrator | 2025-06-01 23:42:26.796308 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-06-01 23:42:26.796714 | orchestrator | Sunday 01 June 2025 23:42:26 +0000 (0:00:00.390) 0:00:47.401 *********** 2025-06-01 23:42:26.959150 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e43a5796-5555-5d7b-8188-8712d414b3d1', 'data_vg': 'ceph-e43a5796-5555-5d7b-8188-8712d414b3d1'})  2025-06-01 23:42:26.962103 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af', 'data_vg': 'ceph-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af'})  2025-06-01 23:42:26.962808 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:26.963854 | orchestrator | 2025-06-01 23:42:26.964697 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-06-01 23:42:26.965142 | orchestrator | Sunday 01 June 2025 23:42:26 +0000 (0:00:00.166) 0:00:47.568 *********** 2025-06-01 23:42:27.124704 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e43a5796-5555-5d7b-8188-8712d414b3d1', 'data_vg': 'ceph-e43a5796-5555-5d7b-8188-8712d414b3d1'})  2025-06-01 23:42:27.126388 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af', 'data_vg': 'ceph-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af'})  2025-06-01 23:42:27.127455 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:27.129045 | orchestrator | 2025-06-01 23:42:27.129750 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-06-01 23:42:27.130314 | orchestrator | Sunday 01 June 2025 23:42:27 +0000 (0:00:00.167) 0:00:47.735 *********** 2025-06-01 23:42:27.289103 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e43a5796-5555-5d7b-8188-8712d414b3d1', 'data_vg': 'ceph-e43a5796-5555-5d7b-8188-8712d414b3d1'})  2025-06-01 23:42:27.290772 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af', 'data_vg': 'ceph-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af'})  2025-06-01 23:42:27.291691 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:27.292803 | orchestrator | 2025-06-01 23:42:27.293821 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-06-01 23:42:27.294887 | orchestrator | Sunday 01 June 2025 23:42:27 +0000 (0:00:00.164) 0:00:47.899 *********** 2025-06-01 23:42:27.440187 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e43a5796-5555-5d7b-8188-8712d414b3d1', 'data_vg': 'ceph-e43a5796-5555-5d7b-8188-8712d414b3d1'})  2025-06-01 23:42:27.440720 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af', 'data_vg': 'ceph-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af'})  2025-06-01 23:42:27.441950 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:27.442867 | orchestrator | 
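[Editor's note] The next tasks collect the Ceph LVs and PVs with their associated VGs and merge the two JSON reports into the lvm_report structure printed at the end of this play. A rough equivalent, assuming LVM2's --reportformat json output; the exact invocation used by the playbook is not shown in this log, so the command lines here are an assumption:

import json
import subprocess

def lvm_json(cmd):
    # Both lvs and pvs support machine-readable output via --reportformat
    # json; the payload sits under report[0].
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    return json.loads(out)["report"][0]

lvs = lvm_json(["lvs", "-o", "lv_name,vg_name", "--reportformat", "json"])
pvs = lvm_json(["pvs", "-o", "pv_name,vg_name", "--reportformat", "json"])

# Combine both reports into one dict, mirroring the lvm_report debug output
# below (an 'lv' list and a 'pv' list, each pairing a name with its VG).
lvm_report = {"lv": lvs["lv"], "pv": pvs["pv"]}
print(json.dumps(lvm_report, indent=2))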
2025-06-01 23:42:27.443599 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-06-01 23:42:27.444356 | orchestrator | Sunday 01 June 2025 23:42:27 +0000 (0:00:00.149) 0:00:48.049 *********** 2025-06-01 23:42:27.964966 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:42:27.965781 | orchestrator | 2025-06-01 23:42:27.966574 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-06-01 23:42:27.967498 | orchestrator | Sunday 01 June 2025 23:42:27 +0000 (0:00:00.526) 0:00:48.575 *********** 2025-06-01 23:42:28.500515 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:42:28.500615 | orchestrator | 2025-06-01 23:42:28.500631 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-06-01 23:42:28.500644 | orchestrator | Sunday 01 June 2025 23:42:28 +0000 (0:00:00.534) 0:00:49.110 *********** 2025-06-01 23:42:28.650432 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:42:28.650639 | orchestrator | 2025-06-01 23:42:28.651422 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-06-01 23:42:28.653660 | orchestrator | Sunday 01 June 2025 23:42:28 +0000 (0:00:00.150) 0:00:49.260 *********** 2025-06-01 23:42:28.823114 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af', 'vg_name': 'ceph-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af'}) 2025-06-01 23:42:28.823605 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-e43a5796-5555-5d7b-8188-8712d414b3d1', 'vg_name': 'ceph-e43a5796-5555-5d7b-8188-8712d414b3d1'}) 2025-06-01 23:42:28.824800 | orchestrator | 2025-06-01 23:42:28.825772 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-06-01 23:42:28.826493 | orchestrator | Sunday 01 June 2025 23:42:28 +0000 (0:00:00.172) 0:00:49.433 *********** 2025-06-01 23:42:28.982796 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e43a5796-5555-5d7b-8188-8712d414b3d1', 'data_vg': 'ceph-e43a5796-5555-5d7b-8188-8712d414b3d1'})  2025-06-01 23:42:28.983548 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af', 'data_vg': 'ceph-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af'})  2025-06-01 23:42:28.984280 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:28.985306 | orchestrator | 2025-06-01 23:42:28.985538 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-06-01 23:42:28.987080 | orchestrator | Sunday 01 June 2025 23:42:28 +0000 (0:00:00.159) 0:00:49.593 *********** 2025-06-01 23:42:29.135119 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e43a5796-5555-5d7b-8188-8712d414b3d1', 'data_vg': 'ceph-e43a5796-5555-5d7b-8188-8712d414b3d1'})  2025-06-01 23:42:29.135751 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af', 'data_vg': 'ceph-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af'})  2025-06-01 23:42:29.137030 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:29.137719 | orchestrator | 2025-06-01 23:42:29.138376 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-06-01 23:42:29.139198 | orchestrator | Sunday 01 June 2025 23:42:29 +0000 (0:00:00.152) 0:00:49.745 *********** 2025-06-01 23:42:29.288956 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-e43a5796-5555-5d7b-8188-8712d414b3d1', 'data_vg': 'ceph-e43a5796-5555-5d7b-8188-8712d414b3d1'})  2025-06-01 23:42:29.289171 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af', 'data_vg': 'ceph-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af'})  2025-06-01 23:42:29.290714 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:42:29.291533 | orchestrator | 2025-06-01 23:42:29.292234 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-06-01 23:42:29.292789 | orchestrator | Sunday 01 June 2025 23:42:29 +0000 (0:00:00.151) 0:00:49.897 *********** 2025-06-01 23:42:29.777341 | orchestrator | ok: [testbed-node-4] => { 2025-06-01 23:42:29.777477 | orchestrator |  "lvm_report": { 2025-06-01 23:42:29.778245 | orchestrator |  "lv": [ 2025-06-01 23:42:29.779044 | orchestrator |  { 2025-06-01 23:42:29.780632 | orchestrator |  "lv_name": "osd-block-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af", 2025-06-01 23:42:29.782287 | orchestrator |  "vg_name": "ceph-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af" 2025-06-01 23:42:29.782832 | orchestrator |  }, 2025-06-01 23:42:29.783648 | orchestrator |  { 2025-06-01 23:42:29.784133 | orchestrator |  "lv_name": "osd-block-e43a5796-5555-5d7b-8188-8712d414b3d1", 2025-06-01 23:42:29.784630 | orchestrator |  "vg_name": "ceph-e43a5796-5555-5d7b-8188-8712d414b3d1" 2025-06-01 23:42:29.785778 | orchestrator |  } 2025-06-01 23:42:29.785969 | orchestrator |  ], 2025-06-01 23:42:29.786407 | orchestrator |  "pv": [ 2025-06-01 23:42:29.787056 | orchestrator |  { 2025-06-01 23:42:29.787745 | orchestrator |  "pv_name": "/dev/sdb", 2025-06-01 23:42:29.788714 | orchestrator |  "vg_name": "ceph-e43a5796-5555-5d7b-8188-8712d414b3d1" 2025-06-01 23:42:29.789276 | orchestrator |  }, 2025-06-01 23:42:29.789754 | orchestrator |  { 2025-06-01 23:42:29.790410 | orchestrator |  "pv_name": "/dev/sdc", 2025-06-01 23:42:29.791038 | orchestrator |  "vg_name": "ceph-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af" 2025-06-01 23:42:29.791419 | orchestrator |  } 2025-06-01 23:42:29.792485 | orchestrator |  ] 2025-06-01 23:42:29.794068 | orchestrator |  } 2025-06-01 23:42:29.794219 | orchestrator | } 2025-06-01 23:42:29.794964 | orchestrator | 2025-06-01 23:42:29.795754 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-06-01 23:42:29.797118 | orchestrator | 2025-06-01 23:42:29.797867 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-01 23:42:29.798752 | orchestrator | Sunday 01 June 2025 23:42:29 +0000 (0:00:00.490) 0:00:50.388 *********** 2025-06-01 23:42:30.020125 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-06-01 23:42:30.020229 | orchestrator | 2025-06-01 23:42:30.021459 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-01 23:42:30.021938 | orchestrator | Sunday 01 June 2025 23:42:30 +0000 (0:00:00.242) 0:00:50.630 *********** 2025-06-01 23:42:30.247807 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:42:30.247890 | orchestrator | 2025-06-01 23:42:30.248183 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 23:42:30.250825 | orchestrator | Sunday 01 June 2025 23:42:30 +0000 (0:00:00.226) 0:00:50.857 *********** 2025-06-01 23:42:30.653859 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-06-01 
23:42:30.654638 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-06-01 23:42:30.655875 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-06-01 23:42:30.656594 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-06-01 23:42:30.658529 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-06-01 23:42:30.660116 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-06-01 23:42:30.660613 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-06-01 23:42:30.661626 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-06-01 23:42:30.662000 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-06-01 23:42:30.662347 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-06-01 23:42:30.663081 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-06-01 23:42:30.664193 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-06-01 23:42:30.664571 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-06-01 23:42:30.665334 | orchestrator | 2025-06-01 23:42:30.666092 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 23:42:30.666983 | orchestrator | Sunday 01 June 2025 23:42:30 +0000 (0:00:00.406) 0:00:51.263 *********** 2025-06-01 23:42:30.860439 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:30.860543 | orchestrator | 2025-06-01 23:42:30.861210 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 23:42:30.861802 | orchestrator | Sunday 01 June 2025 23:42:30 +0000 (0:00:00.206) 0:00:51.470 *********** 2025-06-01 23:42:31.038591 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:31.038765 | orchestrator | 2025-06-01 23:42:31.039464 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 23:42:31.040092 | orchestrator | Sunday 01 June 2025 23:42:31 +0000 (0:00:00.178) 0:00:51.649 *********** 2025-06-01 23:42:31.237639 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:31.237891 | orchestrator | 2025-06-01 23:42:31.239415 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 23:42:31.240807 | orchestrator | Sunday 01 June 2025 23:42:31 +0000 (0:00:00.198) 0:00:51.848 *********** 2025-06-01 23:42:31.435181 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:31.435860 | orchestrator | 2025-06-01 23:42:31.436698 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 23:42:31.438525 | orchestrator | Sunday 01 June 2025 23:42:31 +0000 (0:00:00.196) 0:00:52.045 *********** 2025-06-01 23:42:31.635039 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:31.635179 | orchestrator | 2025-06-01 23:42:31.635725 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 23:42:31.636556 | orchestrator | Sunday 01 June 2025 23:42:31 +0000 (0:00:00.199) 0:00:52.244 
*********** 2025-06-01 23:42:32.266787 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:32.267078 | orchestrator | 2025-06-01 23:42:32.268357 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 23:42:32.269436 | orchestrator | Sunday 01 June 2025 23:42:32 +0000 (0:00:00.632) 0:00:52.877 *********** 2025-06-01 23:42:32.502366 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:32.502527 | orchestrator | 2025-06-01 23:42:32.503465 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 23:42:32.505299 | orchestrator | Sunday 01 June 2025 23:42:32 +0000 (0:00:00.234) 0:00:53.112 *********** 2025-06-01 23:42:32.703404 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:32.704409 | orchestrator | 2025-06-01 23:42:32.705110 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 23:42:32.705780 | orchestrator | Sunday 01 June 2025 23:42:32 +0000 (0:00:00.202) 0:00:53.314 *********** 2025-06-01 23:42:33.141728 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ee682ed7-cf61-4b4b-b7dc-0c09473318ce) 2025-06-01 23:42:33.142621 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ee682ed7-cf61-4b4b-b7dc-0c09473318ce) 2025-06-01 23:42:33.144358 | orchestrator | 2025-06-01 23:42:33.145393 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 23:42:33.146162 | orchestrator | Sunday 01 June 2025 23:42:33 +0000 (0:00:00.438) 0:00:53.752 *********** 2025-06-01 23:42:33.591530 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b890f567-0ad2-40b6-bedf-e62e59fc0322) 2025-06-01 23:42:33.591956 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b890f567-0ad2-40b6-bedf-e62e59fc0322) 2025-06-01 23:42:33.593182 | orchestrator | 2025-06-01 23:42:33.594785 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 23:42:33.594809 | orchestrator | Sunday 01 June 2025 23:42:33 +0000 (0:00:00.448) 0:00:54.201 *********** 2025-06-01 23:42:34.034134 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9eb75d32-600b-4da1-bdd4-064d087d06d5) 2025-06-01 23:42:34.034335 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9eb75d32-600b-4da1-bdd4-064d087d06d5) 2025-06-01 23:42:34.035158 | orchestrator | 2025-06-01 23:42:34.036164 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 23:42:34.036996 | orchestrator | Sunday 01 June 2025 23:42:34 +0000 (0:00:00.442) 0:00:54.644 *********** 2025-06-01 23:42:34.470586 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a8e8789d-2f8d-4752-a1c5-15f6e96bd27f) 2025-06-01 23:42:34.471166 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a8e8789d-2f8d-4752-a1c5-15f6e96bd27f) 2025-06-01 23:42:34.472502 | orchestrator | 2025-06-01 23:42:34.474723 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-01 23:42:34.475292 | orchestrator | Sunday 01 June 2025 23:42:34 +0000 (0:00:00.435) 0:00:55.080 *********** 2025-06-01 23:42:34.823286 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-01 23:42:34.824164 | orchestrator | 2025-06-01 23:42:34.824563 | orchestrator | TASK [Add known partitions to the list of 
available block devices] ************* 2025-06-01 23:42:34.825474 | orchestrator | Sunday 01 June 2025 23:42:34 +0000 (0:00:00.353) 0:00:55.433 *********** 2025-06-01 23:42:35.242934 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-06-01 23:42:35.243399 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-06-01 23:42:35.245006 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-06-01 23:42:35.246792 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-06-01 23:42:35.246821 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-06-01 23:42:35.247847 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-06-01 23:42:35.248484 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-06-01 23:42:35.249284 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-06-01 23:42:35.249809 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-06-01 23:42:35.250399 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-06-01 23:42:35.251150 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-06-01 23:42:35.251838 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-06-01 23:42:35.252841 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-06-01 23:42:35.253056 | orchestrator | 2025-06-01 23:42:35.253809 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:42:35.254590 | orchestrator | Sunday 01 June 2025 23:42:35 +0000 (0:00:00.419) 0:00:55.853 *********** 2025-06-01 23:42:35.442332 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:35.442435 | orchestrator | 2025-06-01 23:42:35.443171 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:42:35.444048 | orchestrator | Sunday 01 June 2025 23:42:35 +0000 (0:00:00.198) 0:00:56.052 *********** 2025-06-01 23:42:35.657606 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:35.657782 | orchestrator | 2025-06-01 23:42:35.658542 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:42:35.659722 | orchestrator | Sunday 01 June 2025 23:42:35 +0000 (0:00:00.214) 0:00:56.267 *********** 2025-06-01 23:42:36.300973 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:36.301197 | orchestrator | 2025-06-01 23:42:36.302431 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:42:36.303362 | orchestrator | Sunday 01 June 2025 23:42:36 +0000 (0:00:00.644) 0:00:56.911 *********** 2025-06-01 23:42:36.509147 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:36.510464 | orchestrator | 2025-06-01 23:42:36.512027 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:42:36.512473 | orchestrator | Sunday 01 June 2025 23:42:36 +0000 (0:00:00.208) 0:00:57.119 
*********** 2025-06-01 23:42:36.712125 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:36.712241 | orchestrator | 2025-06-01 23:42:36.712256 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:42:36.713317 | orchestrator | Sunday 01 June 2025 23:42:36 +0000 (0:00:00.203) 0:00:57.322 *********** 2025-06-01 23:42:36.929321 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:36.930076 | orchestrator | 2025-06-01 23:42:36.930222 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:42:36.931352 | orchestrator | Sunday 01 June 2025 23:42:36 +0000 (0:00:00.217) 0:00:57.540 *********** 2025-06-01 23:42:37.125893 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:37.127123 | orchestrator | 2025-06-01 23:42:37.129339 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:42:37.130668 | orchestrator | Sunday 01 June 2025 23:42:37 +0000 (0:00:00.195) 0:00:57.735 *********** 2025-06-01 23:42:37.330669 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:37.331795 | orchestrator | 2025-06-01 23:42:37.332621 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:42:37.333621 | orchestrator | Sunday 01 June 2025 23:42:37 +0000 (0:00:00.206) 0:00:57.941 *********** 2025-06-01 23:42:37.961824 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-06-01 23:42:37.962539 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-06-01 23:42:37.963507 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-06-01 23:42:37.964347 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-06-01 23:42:37.965223 | orchestrator | 2025-06-01 23:42:37.965591 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:42:37.966453 | orchestrator | Sunday 01 June 2025 23:42:37 +0000 (0:00:00.627) 0:00:58.569 *********** 2025-06-01 23:42:38.157443 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:38.158340 | orchestrator | 2025-06-01 23:42:38.158749 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:42:38.159580 | orchestrator | Sunday 01 June 2025 23:42:38 +0000 (0:00:00.199) 0:00:58.768 *********** 2025-06-01 23:42:38.362129 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:38.363314 | orchestrator | 2025-06-01 23:42:38.363734 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:42:38.365471 | orchestrator | Sunday 01 June 2025 23:42:38 +0000 (0:00:00.204) 0:00:58.972 *********** 2025-06-01 23:42:38.568778 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:38.569554 | orchestrator | 2025-06-01 23:42:38.573511 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-01 23:42:38.573961 | orchestrator | Sunday 01 June 2025 23:42:38 +0000 (0:00:00.206) 0:00:59.178 *********** 2025-06-01 23:42:38.771880 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:38.772993 | orchestrator | 2025-06-01 23:42:38.774380 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-01 23:42:38.777053 | orchestrator | Sunday 01 June 2025 23:42:38 +0000 (0:00:00.203) 0:00:59.382 *********** 2025-06-01 23:42:39.116653 | orchestrator | skipping: [testbed-node-5] 
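[Editor's note] The dict built in the next task maps each entry of ceph_osd_devices to a volume group named after its osd_lvm_uuid, which is where the VG/LV names in the 'Create block VGs' and 'Create block LVs' tasks come from. A sketch of that derivation using the testbed-node-5 values from this log (the UUIDs are taken verbatim from the output; the derivation logic itself is an assumption, since the task source is not shown):

# ceph_osd_devices as reported for testbed-node-5 in this run.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "94e6c78b-35f7-5cb8-865b-5befb7b6694e"},
    "sdc": {"osd_lvm_uuid": "0de39833-f6ff-5bf1-9ca3-735e32822edb"},
}

# Each device gets a VG 'ceph-<uuid>' backed by the device as its PV, and a
# single LV 'osd-block-<uuid>' inside it.
block_vgs = {
    f"ceph-{v['osd_lvm_uuid']}": f"/dev/{dev}"
    for dev, v in ceph_osd_devices.items()
}
lvm_volumes = [
    {"data": f"osd-block-{v['osd_lvm_uuid']}",
     "data_vg": f"ceph-{v['osd_lvm_uuid']}"}
    for v in ceph_osd_devices.values()
]
print(block_vgs)
print(lvm_volumes)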
2025-06-01 23:42:39.117460 | orchestrator | 2025-06-01 23:42:39.118606 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-06-01 23:42:39.119613 | orchestrator | Sunday 01 June 2025 23:42:39 +0000 (0:00:00.344) 0:00:59.727 *********** 2025-06-01 23:42:39.307995 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '94e6c78b-35f7-5cb8-865b-5befb7b6694e'}}) 2025-06-01 23:42:39.309266 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0de39833-f6ff-5bf1-9ca3-735e32822edb'}}) 2025-06-01 23:42:39.310248 | orchestrator | 2025-06-01 23:42:39.311575 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-01 23:42:39.312405 | orchestrator | Sunday 01 June 2025 23:42:39 +0000 (0:00:00.191) 0:00:59.918 *********** 2025-06-01 23:42:41.375647 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-94e6c78b-35f7-5cb8-865b-5befb7b6694e', 'data_vg': 'ceph-94e6c78b-35f7-5cb8-865b-5befb7b6694e'}) 2025-06-01 23:42:41.376600 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-0de39833-f6ff-5bf1-9ca3-735e32822edb', 'data_vg': 'ceph-0de39833-f6ff-5bf1-9ca3-735e32822edb'}) 2025-06-01 23:42:41.378076 | orchestrator | 2025-06-01 23:42:41.378717 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-01 23:42:41.380222 | orchestrator | Sunday 01 June 2025 23:42:41 +0000 (0:00:02.064) 0:01:01.983 *********** 2025-06-01 23:42:41.551473 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-94e6c78b-35f7-5cb8-865b-5befb7b6694e', 'data_vg': 'ceph-94e6c78b-35f7-5cb8-865b-5befb7b6694e'})  2025-06-01 23:42:41.552366 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0de39833-f6ff-5bf1-9ca3-735e32822edb', 'data_vg': 'ceph-0de39833-f6ff-5bf1-9ca3-735e32822edb'})  2025-06-01 23:42:41.553399 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:41.555236 | orchestrator | 2025-06-01 23:42:41.556042 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-06-01 23:42:41.557263 | orchestrator | Sunday 01 June 2025 23:42:41 +0000 (0:00:00.178) 0:01:02.162 *********** 2025-06-01 23:42:42.810895 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-94e6c78b-35f7-5cb8-865b-5befb7b6694e', 'data_vg': 'ceph-94e6c78b-35f7-5cb8-865b-5befb7b6694e'}) 2025-06-01 23:42:42.811119 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-0de39833-f6ff-5bf1-9ca3-735e32822edb', 'data_vg': 'ceph-0de39833-f6ff-5bf1-9ca3-735e32822edb'}) 2025-06-01 23:42:42.811138 | orchestrator | 2025-06-01 23:42:42.811230 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-01 23:42:42.811247 | orchestrator | Sunday 01 June 2025 23:42:42 +0000 (0:00:01.258) 0:01:03.420 *********** 2025-06-01 23:42:42.968851 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-94e6c78b-35f7-5cb8-865b-5befb7b6694e', 'data_vg': 'ceph-94e6c78b-35f7-5cb8-865b-5befb7b6694e'})  2025-06-01 23:42:42.969091 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0de39833-f6ff-5bf1-9ca3-735e32822edb', 'data_vg': 'ceph-0de39833-f6ff-5bf1-9ca3-735e32822edb'})  2025-06-01 23:42:42.970110 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:42.972270 | orchestrator | 2025-06-01 23:42:42.972372 | orchestrator | TASK [Create 
DB VGs] *********************************************************** 2025-06-01 23:42:42.973707 | orchestrator | Sunday 01 June 2025 23:42:42 +0000 (0:00:00.158) 0:01:03.579 *********** 2025-06-01 23:42:43.105636 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:43.106115 | orchestrator | 2025-06-01 23:42:43.106968 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-01 23:42:43.108210 | orchestrator | Sunday 01 June 2025 23:42:43 +0000 (0:00:00.137) 0:01:03.716 *********** 2025-06-01 23:42:43.257266 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-94e6c78b-35f7-5cb8-865b-5befb7b6694e', 'data_vg': 'ceph-94e6c78b-35f7-5cb8-865b-5befb7b6694e'})  2025-06-01 23:42:43.257468 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0de39833-f6ff-5bf1-9ca3-735e32822edb', 'data_vg': 'ceph-0de39833-f6ff-5bf1-9ca3-735e32822edb'})  2025-06-01 23:42:43.257719 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:43.258997 | orchestrator | 2025-06-01 23:42:43.259452 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-01 23:42:43.259754 | orchestrator | Sunday 01 June 2025 23:42:43 +0000 (0:00:00.153) 0:01:03.869 *********** 2025-06-01 23:42:43.409699 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:43.411428 | orchestrator | 2025-06-01 23:42:43.415264 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-01 23:42:43.415300 | orchestrator | Sunday 01 June 2025 23:42:43 +0000 (0:00:00.150) 0:01:04.020 *********** 2025-06-01 23:42:43.567160 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-94e6c78b-35f7-5cb8-865b-5befb7b6694e', 'data_vg': 'ceph-94e6c78b-35f7-5cb8-865b-5befb7b6694e'})  2025-06-01 23:42:43.567714 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0de39833-f6ff-5bf1-9ca3-735e32822edb', 'data_vg': 'ceph-0de39833-f6ff-5bf1-9ca3-735e32822edb'})  2025-06-01 23:42:43.569201 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:43.571236 | orchestrator | 2025-06-01 23:42:43.572263 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-01 23:42:43.574365 | orchestrator | Sunday 01 June 2025 23:42:43 +0000 (0:00:00.156) 0:01:04.176 *********** 2025-06-01 23:42:43.704251 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:43.705350 | orchestrator | 2025-06-01 23:42:43.706294 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-06-01 23:42:43.707009 | orchestrator | Sunday 01 June 2025 23:42:43 +0000 (0:00:00.138) 0:01:04.315 *********** 2025-06-01 23:42:43.863139 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-94e6c78b-35f7-5cb8-865b-5befb7b6694e', 'data_vg': 'ceph-94e6c78b-35f7-5cb8-865b-5befb7b6694e'})  2025-06-01 23:42:43.863826 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0de39833-f6ff-5bf1-9ca3-735e32822edb', 'data_vg': 'ceph-0de39833-f6ff-5bf1-9ca3-735e32822edb'})  2025-06-01 23:42:43.864858 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:43.865861 | orchestrator | 2025-06-01 23:42:43.866804 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-01 23:42:43.867320 | orchestrator | Sunday 01 June 2025 23:42:43 +0000 (0:00:00.156) 0:01:04.471 *********** 2025-06-01 23:42:44.004145 | orchestrator | 
2025-06-01 23:42:44.004574 | orchestrator |
2025-06-01 23:42:44.006152 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-06-01 23:42:44.006557 | orchestrator | Sunday 01 June 2025 23:42:43 +0000 (0:00:00.142) 0:01:04.614 ***********
2025-06-01 23:42:44.373481 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-94e6c78b-35f7-5cb8-865b-5befb7b6694e', 'data_vg': 'ceph-94e6c78b-35f7-5cb8-865b-5befb7b6694e'})
2025-06-01 23:42:44.374185 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0de39833-f6ff-5bf1-9ca3-735e32822edb', 'data_vg': 'ceph-0de39833-f6ff-5bf1-9ca3-735e32822edb'})
2025-06-01 23:42:44.375393 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:42:44.376150 | orchestrator |
2025-06-01 23:42:44.377617 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-06-01 23:42:44.377996 | orchestrator | Sunday 01 June 2025 23:42:44 +0000 (0:00:00.369) 0:01:04.984 ***********
2025-06-01 23:42:44.532258 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-94e6c78b-35f7-5cb8-865b-5befb7b6694e', 'data_vg': 'ceph-94e6c78b-35f7-5cb8-865b-5befb7b6694e'})
2025-06-01 23:42:44.532380 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0de39833-f6ff-5bf1-9ca3-735e32822edb', 'data_vg': 'ceph-0de39833-f6ff-5bf1-9ca3-735e32822edb'})
2025-06-01 23:42:44.532397 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:42:44.533061 | orchestrator |
2025-06-01 23:42:44.533867 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-06-01 23:42:44.534481 | orchestrator | Sunday 01 June 2025 23:42:44 +0000 (0:00:00.154) 0:01:05.138 ***********
2025-06-01 23:42:44.690706 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-94e6c78b-35f7-5cb8-865b-5befb7b6694e', 'data_vg': 'ceph-94e6c78b-35f7-5cb8-865b-5befb7b6694e'})
2025-06-01 23:42:44.690874 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0de39833-f6ff-5bf1-9ca3-735e32822edb', 'data_vg': 'ceph-0de39833-f6ff-5bf1-9ca3-735e32822edb'})
2025-06-01 23:42:44.691016 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:42:44.693409 | orchestrator |
2025-06-01 23:42:44.694162 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-06-01 23:42:44.694744 | orchestrator | Sunday 01 June 2025 23:42:44 +0000 (0:00:00.163) 0:01:05.302 ***********
2025-06-01 23:42:44.826249 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:42:44.826798 | orchestrator |
2025-06-01 23:42:44.828456 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-06-01 23:42:44.830010 | orchestrator | Sunday 01 June 2025 23:42:44 +0000 (0:00:00.134) 0:01:05.437 ***********
2025-06-01 23:42:44.965084 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:42:44.965175 | orchestrator |
2025-06-01 23:42:44.966194 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-06-01 23:42:44.967045 | orchestrator | Sunday 01 June 2025 23:42:44 +0000 (0:00:00.138) 0:01:05.575 ***********
2025-06-01 23:42:45.096203 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:42:45.097335 | orchestrator |
2025-06-01 23:42:45.100555 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-06-01 23:42:45.100589 | orchestrator | Sunday 01 June 2025 23:42:45 +0000 (0:00:00.131) 0:01:05.707 ***********
2025-06-01 23:42:45.262867 | orchestrator | ok: [testbed-node-5] => {
2025-06-01 23:42:45.263483 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-06-01 23:42:45.264168 | orchestrator | }
2025-06-01 23:42:45.264673 | orchestrator |
2025-06-01 23:42:45.265382 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-06-01 23:42:45.267238 | orchestrator | Sunday 01 June 2025 23:42:45 +0000 (0:00:00.166) 0:01:05.873 ***********
2025-06-01 23:42:45.409237 | orchestrator | ok: [testbed-node-5] => {
2025-06-01 23:42:45.409623 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-06-01 23:42:45.410752 | orchestrator | }
2025-06-01 23:42:45.411513 | orchestrator |
2025-06-01 23:42:45.412225 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-06-01 23:42:45.413934 | orchestrator | Sunday 01 June 2025 23:42:45 +0000 (0:00:00.146) 0:01:06.020 ***********
2025-06-01 23:42:45.556404 | orchestrator | ok: [testbed-node-5] => {
2025-06-01 23:42:45.557505 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-06-01 23:42:45.560250 | orchestrator | }
2025-06-01 23:42:45.560286 | orchestrator |
2025-06-01 23:42:45.561137 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-06-01 23:42:45.561851 | orchestrator | Sunday 01 June 2025 23:42:45 +0000 (0:00:00.145) 0:01:06.165 ***********
2025-06-01 23:42:46.079147 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:42:46.080735 | orchestrator |
2025-06-01 23:42:46.081086 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-06-01 23:42:46.081448 | orchestrator | Sunday 01 June 2025 23:42:46 +0000 (0:00:00.521) 0:01:06.687 ***********
2025-06-01 23:42:46.591374 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:42:46.591569 | orchestrator |
2025-06-01 23:42:46.592365 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-06-01 23:42:46.593123 | orchestrator | Sunday 01 June 2025 23:42:46 +0000 (0:00:00.513) 0:01:07.201 ***********
2025-06-01 23:42:47.100474 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:42:47.257858 | orchestrator |
2025-06-01 23:42:47.258001 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-06-01 23:42:47.258074 | orchestrator | Sunday 01 June 2025 23:42:47 +0000 (0:00:00.508) 0:01:07.710 ***********
2025-06-01 23:42:47.478241 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:42:47.478396 | orchestrator |
2025-06-01 23:42:47.479053 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-06-01 23:42:47.479644 | orchestrator | Sunday 01 June 2025 23:42:47 +0000 (0:00:00.378) 0:01:08.089 ***********
2025-06-01 23:42:47.600414 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:42:47.600994 | orchestrator |
2025-06-01 23:42:47.602470 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-06-01 23:42:47.603320 | orchestrator | Sunday 01 June 2025 23:42:47 +0000 (0:00:00.119) 0:01:08.209 ***********
2025-06-01 23:42:47.719699 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:42:47.720225 | orchestrator |
2025-06-01 23:42:47.721680 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-06-01 23:42:47.723100 | orchestrator | Sunday 01 June 2025 23:42:47 +0000 (0:00:00.120) 0:01:08.329 ***********
2025-06-01 23:42:47.859161 | orchestrator | ok: [testbed-node-5] => {
2025-06-01 23:42:47.859796 | orchestrator |  "vgs_report": {
2025-06-01 23:42:47.861357 | orchestrator |  "vg": []
2025-06-01 23:42:47.862320 | orchestrator |  }
2025-06-01 23:42:47.863095 | orchestrator | }
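The three 'Gather ... VGs with total and available size in bytes' tasks and the 'Combine JSON' step feed the vgs_report just printed (empty here, since no DB/WAL devices are configured on this node). A hedged tasks-file sketch of how such a report can be collected; the variable names mirror the log, but the exact vgs invocation and filters used by the collection are assumptions:

- name: Gather DB VGs with total and available size in bytes (sketch)
  ansible.builtin.command:
    cmd: vgs --reportformat json --units b --nosuffix -o vg_name,vg_size,vg_free
  register: _db_vgs_cmd_output
  changed_when: false

- name: Combine JSON from the vgs output (sketch)
  ansible.builtin.set_fact:
    # vgs JSON looks like {"report": [{"vg": [...]}]}, so report[0]
    # yields the {"vg": [...]} dict seen in the log output above.
    vgs_report: "{{ (_db_vgs_cmd_output.stdout | from_json).report[0] }}"

- name: Print LVM VGs report data (sketch)
  ansible.builtin.debug:
    var: vgs_report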
2025-06-01 23:42:47.863691 | orchestrator |
2025-06-01 23:42:47.864715 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-06-01 23:42:47.865115 | orchestrator | Sunday 01 June 2025 23:42:47 +0000 (0:00:00.138) 0:01:08.468 ***********
2025-06-01 23:42:47.992113 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:42:47.992923 | orchestrator |
2025-06-01 23:42:47.993683 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-06-01 23:42:47.994490 | orchestrator | Sunday 01 June 2025 23:42:47 +0000 (0:00:00.134) 0:01:08.603 ***********
2025-06-01 23:42:48.126815 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:42:48.127793 | orchestrator |
2025-06-01 23:42:48.128525 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-06-01 23:42:48.129499 | orchestrator | Sunday 01 June 2025 23:42:48 +0000 (0:00:00.134) 0:01:08.737 ***********
2025-06-01 23:42:48.266720 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:42:48.267279 | orchestrator |
2025-06-01 23:42:48.268181 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-06-01 23:42:48.268674 | orchestrator | Sunday 01 June 2025 23:42:48 +0000 (0:00:00.140) 0:01:08.878 ***********
2025-06-01 23:42:48.423612 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:42:48.423695 | orchestrator |
2025-06-01 23:42:48.424135 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-06-01 23:42:48.424698 | orchestrator | Sunday 01 June 2025 23:42:48 +0000 (0:00:00.156) 0:01:09.034 ***********
2025-06-01 23:42:48.560452 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:42:48.561168 | orchestrator |
2025-06-01 23:42:48.561739 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-06-01 23:42:48.562603 | orchestrator | Sunday 01 June 2025 23:42:48 +0000 (0:00:00.136) 0:01:09.171 ***********
2025-06-01 23:42:48.710640 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:42:48.710815 | orchestrator |
2025-06-01 23:42:48.711093 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-06-01 23:42:48.711668 | orchestrator | Sunday 01 June 2025 23:42:48 +0000 (0:00:00.150) 0:01:09.321 ***********
2025-06-01 23:42:48.865268 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:42:48.865462 | orchestrator |
2025-06-01 23:42:48.866464 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-06-01 23:42:48.867032 | orchestrator | Sunday 01 June 2025 23:42:48 +0000 (0:00:00.154) 0:01:09.476 ***********
2025-06-01 23:42:49.015179 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:42:49.015387 | orchestrator |
2025-06-01 23:42:49.016035 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-06-01 23:42:49.016455 | orchestrator | Sunday 01 June 2025 23:42:49 +0000 (0:00:00.150) 0:01:09.626 ***********
2025-06-01 23:42:49.391475 |
orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:49.391869 | orchestrator | 2025-06-01 23:42:49.392683 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-01 23:42:49.393520 | orchestrator | Sunday 01 June 2025 23:42:49 +0000 (0:00:00.376) 0:01:10.003 *********** 2025-06-01 23:42:49.527714 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:49.528370 | orchestrator | 2025-06-01 23:42:49.529145 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-01 23:42:49.529861 | orchestrator | Sunday 01 June 2025 23:42:49 +0000 (0:00:00.135) 0:01:10.139 *********** 2025-06-01 23:42:49.663156 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:49.663355 | orchestrator | 2025-06-01 23:42:49.664543 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-01 23:42:49.665736 | orchestrator | Sunday 01 June 2025 23:42:49 +0000 (0:00:00.133) 0:01:10.272 *********** 2025-06-01 23:42:49.806515 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:49.806650 | orchestrator | 2025-06-01 23:42:49.807628 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-01 23:42:49.808644 | orchestrator | Sunday 01 June 2025 23:42:49 +0000 (0:00:00.144) 0:01:10.416 *********** 2025-06-01 23:42:49.941639 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:49.942644 | orchestrator | 2025-06-01 23:42:49.943682 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-06-01 23:42:49.945008 | orchestrator | Sunday 01 June 2025 23:42:49 +0000 (0:00:00.135) 0:01:10.552 *********** 2025-06-01 23:42:50.091670 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:50.092157 | orchestrator | 2025-06-01 23:42:50.093587 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-06-01 23:42:50.094647 | orchestrator | Sunday 01 June 2025 23:42:50 +0000 (0:00:00.150) 0:01:10.702 *********** 2025-06-01 23:42:50.257315 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-94e6c78b-35f7-5cb8-865b-5befb7b6694e', 'data_vg': 'ceph-94e6c78b-35f7-5cb8-865b-5befb7b6694e'})  2025-06-01 23:42:50.257586 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0de39833-f6ff-5bf1-9ca3-735e32822edb', 'data_vg': 'ceph-0de39833-f6ff-5bf1-9ca3-735e32822edb'})  2025-06-01 23:42:50.257777 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:50.259674 | orchestrator | 2025-06-01 23:42:50.260089 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-06-01 23:42:50.262984 | orchestrator | Sunday 01 June 2025 23:42:50 +0000 (0:00:00.164) 0:01:10.867 *********** 2025-06-01 23:42:50.408557 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-94e6c78b-35f7-5cb8-865b-5befb7b6694e', 'data_vg': 'ceph-94e6c78b-35f7-5cb8-865b-5befb7b6694e'})  2025-06-01 23:42:50.409710 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0de39833-f6ff-5bf1-9ca3-735e32822edb', 'data_vg': 'ceph-0de39833-f6ff-5bf1-9ca3-735e32822edb'})  2025-06-01 23:42:50.410607 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:50.412270 | orchestrator | 2025-06-01 23:42:50.413348 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-06-01 23:42:50.413793 | orchestrator | Sunday 01 
June 2025 23:42:50 +0000 (0:00:00.151) 0:01:11.018 *********** 2025-06-01 23:42:50.566091 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-94e6c78b-35f7-5cb8-865b-5befb7b6694e', 'data_vg': 'ceph-94e6c78b-35f7-5cb8-865b-5befb7b6694e'})  2025-06-01 23:42:50.569152 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0de39833-f6ff-5bf1-9ca3-735e32822edb', 'data_vg': 'ceph-0de39833-f6ff-5bf1-9ca3-735e32822edb'})  2025-06-01 23:42:50.572178 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:50.573469 | orchestrator | 2025-06-01 23:42:50.573499 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-06-01 23:42:50.573513 | orchestrator | Sunday 01 June 2025 23:42:50 +0000 (0:00:00.153) 0:01:11.171 *********** 2025-06-01 23:42:50.720254 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-94e6c78b-35f7-5cb8-865b-5befb7b6694e', 'data_vg': 'ceph-94e6c78b-35f7-5cb8-865b-5befb7b6694e'})  2025-06-01 23:42:50.720549 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0de39833-f6ff-5bf1-9ca3-735e32822edb', 'data_vg': 'ceph-0de39833-f6ff-5bf1-9ca3-735e32822edb'})  2025-06-01 23:42:50.721183 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:50.722657 | orchestrator | 2025-06-01 23:42:50.723294 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-06-01 23:42:50.724100 | orchestrator | Sunday 01 June 2025 23:42:50 +0000 (0:00:00.159) 0:01:11.330 *********** 2025-06-01 23:42:50.878487 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-94e6c78b-35f7-5cb8-865b-5befb7b6694e', 'data_vg': 'ceph-94e6c78b-35f7-5cb8-865b-5befb7b6694e'})  2025-06-01 23:42:50.878645 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0de39833-f6ff-5bf1-9ca3-735e32822edb', 'data_vg': 'ceph-0de39833-f6ff-5bf1-9ca3-735e32822edb'})  2025-06-01 23:42:50.880030 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:50.882301 | orchestrator | 2025-06-01 23:42:50.882846 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-06-01 23:42:50.883763 | orchestrator | Sunday 01 June 2025 23:42:50 +0000 (0:00:00.157) 0:01:11.488 *********** 2025-06-01 23:42:51.032148 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-94e6c78b-35f7-5cb8-865b-5befb7b6694e', 'data_vg': 'ceph-94e6c78b-35f7-5cb8-865b-5befb7b6694e'})  2025-06-01 23:42:51.032412 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0de39833-f6ff-5bf1-9ca3-735e32822edb', 'data_vg': 'ceph-0de39833-f6ff-5bf1-9ca3-735e32822edb'})  2025-06-01 23:42:51.032525 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:51.033541 | orchestrator | 2025-06-01 23:42:51.034577 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-06-01 23:42:51.035719 | orchestrator | Sunday 01 June 2025 23:42:51 +0000 (0:00:00.153) 0:01:11.642 *********** 2025-06-01 23:42:51.394435 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-94e6c78b-35f7-5cb8-865b-5befb7b6694e', 'data_vg': 'ceph-94e6c78b-35f7-5cb8-865b-5befb7b6694e'})  2025-06-01 23:42:51.395812 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0de39833-f6ff-5bf1-9ca3-735e32822edb', 'data_vg': 'ceph-0de39833-f6ff-5bf1-9ca3-735e32822edb'})  2025-06-01 23:42:51.398593 | orchestrator | skipping: [testbed-node-5] 2025-06-01 
23:42:51.402148 | orchestrator | 2025-06-01 23:42:51.403283 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-06-01 23:42:51.403617 | orchestrator | Sunday 01 June 2025 23:42:51 +0000 (0:00:00.362) 0:01:12.004 *********** 2025-06-01 23:42:51.572600 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-94e6c78b-35f7-5cb8-865b-5befb7b6694e', 'data_vg': 'ceph-94e6c78b-35f7-5cb8-865b-5befb7b6694e'})  2025-06-01 23:42:51.575081 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0de39833-f6ff-5bf1-9ca3-735e32822edb', 'data_vg': 'ceph-0de39833-f6ff-5bf1-9ca3-735e32822edb'})  2025-06-01 23:42:51.575858 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:51.576570 | orchestrator | 2025-06-01 23:42:51.577466 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-06-01 23:42:51.579497 | orchestrator | Sunday 01 June 2025 23:42:51 +0000 (0:00:00.179) 0:01:12.184 *********** 2025-06-01 23:42:52.071030 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:42:52.071746 | orchestrator | 2025-06-01 23:42:52.072007 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-06-01 23:42:52.073466 | orchestrator | Sunday 01 June 2025 23:42:52 +0000 (0:00:00.496) 0:01:12.680 *********** 2025-06-01 23:42:52.551121 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:42:52.552151 | orchestrator | 2025-06-01 23:42:52.553311 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-06-01 23:42:52.554973 | orchestrator | Sunday 01 June 2025 23:42:52 +0000 (0:00:00.481) 0:01:13.162 *********** 2025-06-01 23:42:52.726344 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:42:52.726497 | orchestrator | 2025-06-01 23:42:52.727294 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-06-01 23:42:52.728220 | orchestrator | Sunday 01 June 2025 23:42:52 +0000 (0:00:00.174) 0:01:13.337 *********** 2025-06-01 23:42:52.887494 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-0de39833-f6ff-5bf1-9ca3-735e32822edb', 'vg_name': 'ceph-0de39833-f6ff-5bf1-9ca3-735e32822edb'}) 2025-06-01 23:42:52.888524 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-94e6c78b-35f7-5cb8-865b-5befb7b6694e', 'vg_name': 'ceph-94e6c78b-35f7-5cb8-865b-5befb7b6694e'}) 2025-06-01 23:42:52.889204 | orchestrator | 2025-06-01 23:42:52.890073 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-06-01 23:42:52.890713 | orchestrator | Sunday 01 June 2025 23:42:52 +0000 (0:00:00.161) 0:01:13.499 *********** 2025-06-01 23:42:53.047508 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-94e6c78b-35f7-5cb8-865b-5befb7b6694e', 'data_vg': 'ceph-94e6c78b-35f7-5cb8-865b-5befb7b6694e'})  2025-06-01 23:42:53.048233 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0de39833-f6ff-5bf1-9ca3-735e32822edb', 'data_vg': 'ceph-0de39833-f6ff-5bf1-9ca3-735e32822edb'})  2025-06-01 23:42:53.049394 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:42:53.050256 | orchestrator | 2025-06-01 23:42:53.052319 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-06-01 23:42:53.053525 | orchestrator | Sunday 01 June 2025 23:42:53 +0000 (0:00:00.159) 0:01:13.658 *********** 2025-06-01 23:42:53.201363 | orchestrator | 
skipping: [testbed-node-5] => (item={'data': 'osd-block-94e6c78b-35f7-5cb8-865b-5befb7b6694e', 'data_vg': 'ceph-94e6c78b-35f7-5cb8-865b-5befb7b6694e'})
2025-06-01 23:42:53.202513 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0de39833-f6ff-5bf1-9ca3-735e32822edb', 'data_vg': 'ceph-0de39833-f6ff-5bf1-9ca3-735e32822edb'})
2025-06-01 23:42:53.203777 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:42:53.205966 | orchestrator |
2025-06-01 23:42:53.206000 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-06-01 23:42:53.207131 | orchestrator | Sunday 01 June 2025 23:42:53 +0000 (0:00:00.154) 0:01:13.812 ***********
2025-06-01 23:42:53.358205 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-94e6c78b-35f7-5cb8-865b-5befb7b6694e', 'data_vg': 'ceph-94e6c78b-35f7-5cb8-865b-5befb7b6694e'})
2025-06-01 23:42:53.360746 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-0de39833-f6ff-5bf1-9ca3-735e32822edb', 'data_vg': 'ceph-0de39833-f6ff-5bf1-9ca3-735e32822edb'})
2025-06-01 23:42:53.362136 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:42:53.363605 | orchestrator |
2025-06-01 23:42:53.364776 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-06-01 23:42:53.366221 | orchestrator | Sunday 01 June 2025 23:42:53 +0000 (0:00:00.156) 0:01:13.968 ***********
2025-06-01 23:42:53.521061 | orchestrator | ok: [testbed-node-5] => {
2025-06-01 23:42:53.521686 | orchestrator |  "lvm_report": {
2025-06-01 23:42:53.523708 | orchestrator |  "lv": [
2025-06-01 23:42:53.525083 | orchestrator |  {
2025-06-01 23:42:53.526520 | orchestrator |  "lv_name": "osd-block-0de39833-f6ff-5bf1-9ca3-735e32822edb",
2025-06-01 23:42:53.527771 | orchestrator |  "vg_name": "ceph-0de39833-f6ff-5bf1-9ca3-735e32822edb"
2025-06-01 23:42:53.529216 | orchestrator |  },
2025-06-01 23:42:53.530303 | orchestrator |  {
2025-06-01 23:42:53.531124 | orchestrator |  "lv_name": "osd-block-94e6c78b-35f7-5cb8-865b-5befb7b6694e",
2025-06-01 23:42:53.532398 | orchestrator |  "vg_name": "ceph-94e6c78b-35f7-5cb8-865b-5befb7b6694e"
2025-06-01 23:42:53.533098 | orchestrator |  }
2025-06-01 23:42:53.534194 | orchestrator |  ],
2025-06-01 23:42:53.534863 | orchestrator |  "pv": [
2025-06-01 23:42:53.536101 | orchestrator |  {
2025-06-01 23:42:53.537110 | orchestrator |  "pv_name": "/dev/sdb",
2025-06-01 23:42:53.538297 | orchestrator |  "vg_name": "ceph-94e6c78b-35f7-5cb8-865b-5befb7b6694e"
2025-06-01 23:42:53.539487 | orchestrator |  },
2025-06-01 23:42:53.540999 | orchestrator |  {
2025-06-01 23:42:53.541734 | orchestrator |  "pv_name": "/dev/sdc",
2025-06-01 23:42:53.542939 | orchestrator |  "vg_name": "ceph-0de39833-f6ff-5bf1-9ca3-735e32822edb"
2025-06-01 23:42:53.544734 | orchestrator |  }
2025-06-01 23:42:53.545152 | orchestrator |  ]
2025-06-01 23:42:53.546670 | orchestrator |  }
2025-06-01 23:42:53.548319 | orchestrator | }
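The lvm_report above is assembled by the two 'Get list of Ceph LVs/PVs with associated VGs' tasks and their 'Combine JSON' step: two LVM report queries whose top-level lv and pv keys are merged into one dict. A tasks-file sketch of that mechanic; the -S filter on the ceph- VG name prefix is an assumption:

- name: Get list of Ceph LVs with associated VGs (sketch)
  ansible.builtin.command:
    cmd: lvs --reportformat json -o lv_name,vg_name -S vg_name=~ceph-
  register: _lvs_cmd_output
  changed_when: false

- name: Get list of Ceph PVs with associated VGs (sketch)
  ansible.builtin.command:
    cmd: pvs --reportformat json -o pv_name,vg_name -S vg_name=~ceph-
  register: _pvs_cmd_output
  changed_when: false

- name: Combine JSON from _lvs_cmd_output/_pvs_cmd_output (sketch)
  ansible.builtin.set_fact:
    # Merging the two report[0] dicts produces {"lv": [...], "pv": [...]},
    # the shape printed by "Print LVM report data" above.
    lvm_report: >-
      {{ (_lvs_cmd_output.stdout | from_json).report[0]
         | combine((_pvs_cmd_output.stdout | from_json).report[0]) }}

The 'Fail if ... LV defined in lvm_volumes is missing' guards that precede the report can then assert that every lv_name/vg_name pair expected from lvm_volumes actually appears in lvm_report.lv before the Ceph deployment consumes it.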
2025-06-01 23:42:53.549226 | orchestrator |
2025-06-01 23:42:53.550524 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 23:42:53.550568 | orchestrator | 2025-06-01 23:42:53 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-01 23:42:53.550582 | orchestrator | 2025-06-01 23:42:53 | INFO  | Please wait and do not abort execution.
2025-06-01 23:42:53.550902 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-06-01 23:42:53.551656 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-06-01 23:42:53.552717 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-06-01 23:42:53.553471 | orchestrator |
2025-06-01 23:42:53.554315 | orchestrator |
2025-06-01 23:42:53.555037 | orchestrator |
2025-06-01 23:42:53.556041 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 23:42:53.556418 | orchestrator | Sunday 01 June 2025 23:42:53 +0000 (0:00:00.161) 0:01:14.130 ***********
2025-06-01 23:42:53.556894 | orchestrator | ===============================================================================
2025-06-01 23:42:53.557413 | orchestrator | Create block VGs -------------------------------------------------------- 6.45s
2025-06-01 23:42:53.557828 | orchestrator | Create block LVs -------------------------------------------------------- 4.03s
2025-06-01 23:42:53.558420 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.96s
2025-06-01 23:42:53.558970 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.57s
2025-06-01 23:42:53.560007 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.56s
2025-06-01 23:42:53.561203 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.54s
2025-06-01 23:42:53.563027 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.53s
2025-06-01 23:42:53.564232 | orchestrator | Add known partitions to the list of available block devices ------------- 1.49s
2025-06-01 23:42:53.565143 | orchestrator | Add known links to the list of available block devices ------------------ 1.23s
2025-06-01 23:42:53.565451 | orchestrator | Add known partitions to the list of available block devices ------------- 1.09s
2025-06-01 23:42:53.565894 | orchestrator | Print LVM report data --------------------------------------------------- 0.94s
2025-06-01 23:42:53.566589 | orchestrator | Add known partitions to the list of available block devices ------------- 0.89s
2025-06-01 23:42:53.567904 | orchestrator | Add known links to the list of available block devices ------------------ 0.75s
2025-06-01 23:42:53.569415 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s
2025-06-01 23:42:53.570877 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.74s
2025-06-01 23:42:53.571943 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.71s
2025-06-01 23:42:53.572873 | orchestrator | Get initial list of available block devices ----------------------------- 0.70s
2025-06-01 23:42:53.573740 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.70s
2025-06-01 23:42:53.574595 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.69s
2025-06-01 23:42:53.575364 | orchestrator | Combine JSON from _db/wal/db_wal_vgs_cmd_output ------------------------- 0.68s
2025-06-01 23:42:55.935003 | orchestrator | Registering Redlock._acquired_script
2025-06-01 23:42:55.935177 | orchestrator | Registering Redlock._extend_script 2025-06-01
23:42:55.935192 | orchestrator | Registering Redlock._release_script 2025-06-01 23:42:55.997903 | orchestrator | 2025-06-01 23:42:55 | INFO  | Task 508b2e2f-f91b-4751-ab8c-36c76cf98868 (facts) was prepared for execution. 2025-06-01 23:42:55.998069 | orchestrator | 2025-06-01 23:42:55 | INFO  | It takes a moment until task 508b2e2f-f91b-4751-ab8c-36c76cf98868 (facts) has been started and output is visible here. 2025-06-01 23:43:00.192446 | orchestrator | 2025-06-01 23:43:00.195905 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-06-01 23:43:00.196001 | orchestrator | 2025-06-01 23:43:00.196015 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-01 23:43:00.198002 | orchestrator | Sunday 01 June 2025 23:43:00 +0000 (0:00:00.264) 0:00:00.264 *********** 2025-06-01 23:43:01.737440 | orchestrator | ok: [testbed-manager] 2025-06-01 23:43:01.739179 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:43:01.740203 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:43:01.741381 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:43:01.742483 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:43:01.743168 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:43:01.743972 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:43:01.744708 | orchestrator | 2025-06-01 23:43:01.745513 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-01 23:43:01.746309 | orchestrator | Sunday 01 June 2025 23:43:01 +0000 (0:00:01.543) 0:00:01.808 *********** 2025-06-01 23:43:01.900577 | orchestrator | skipping: [testbed-manager] 2025-06-01 23:43:01.999525 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:43:02.095495 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:43:02.176371 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:43:02.256494 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:43:02.974007 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:43:02.975623 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:43:02.977132 | orchestrator | 2025-06-01 23:43:02.978234 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-01 23:43:02.979668 | orchestrator | 2025-06-01 23:43:02.980895 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-01 23:43:02.981996 | orchestrator | Sunday 01 June 2025 23:43:02 +0000 (0:00:01.240) 0:00:03.048 *********** 2025-06-01 23:43:07.715283 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:43:07.715542 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:43:07.716523 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:43:07.719501 | orchestrator | ok: [testbed-manager] 2025-06-01 23:43:07.719528 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:43:07.719540 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:43:07.720353 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:43:07.721118 | orchestrator | 2025-06-01 23:43:07.722408 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-01 23:43:07.723812 | orchestrator | 2025-06-01 23:43:07.725868 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-01 23:43:07.726552 | orchestrator | Sunday 01 June 2025 23:43:07 +0000 (0:00:04.740) 0:00:07.789 *********** 2025-06-01 23:43:07.864401 | orchestrator | skipping: [testbed-manager] 
2025-06-01 23:43:07.959348 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:43:08.039879 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:43:08.119558 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:43:08.196645 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:43:08.239007 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:43:08.240056 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:43:08.240669 | orchestrator | 2025-06-01 23:43:08.241877 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 23:43:08.242208 | orchestrator | 2025-06-01 23:43:08 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-01 23:43:08.242525 | orchestrator | 2025-06-01 23:43:08 | INFO  | Please wait and do not abort execution. 2025-06-01 23:43:08.243073 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 23:43:08.243897 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 23:43:08.244865 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 23:43:08.245967 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 23:43:08.246779 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 23:43:08.247069 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 23:43:08.247460 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-01 23:43:08.247923 | orchestrator | 2025-06-01 23:43:08.248579 | orchestrator | 2025-06-01 23:43:08.248920 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 23:43:08.249425 | orchestrator | Sunday 01 June 2025 23:43:08 +0000 (0:00:00.525) 0:00:08.314 *********** 2025-06-01 23:43:08.249836 | orchestrator | =============================================================================== 2025-06-01 23:43:08.250411 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.74s 2025-06-01 23:43:08.250749 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.54s 2025-06-01 23:43:08.251204 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.24s 2025-06-01 23:43:08.251628 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s 2025-06-01 23:43:08.879147 | orchestrator | 2025-06-01 23:43:08.880574 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sun Jun 1 23:43:08 UTC 2025 2025-06-01 23:43:08.880632 | orchestrator | 2025-06-01 23:43:10.570845 | orchestrator | 2025-06-01 23:43:10 | INFO  | Collection nutshell is prepared for execution 2025-06-01 23:43:10.571704 | orchestrator | 2025-06-01 23:43:10 | INFO  | D [0] - dotfiles 2025-06-01 23:43:10.576587 | orchestrator | Registering Redlock._acquired_script 2025-06-01 23:43:10.576713 | orchestrator | Registering Redlock._extend_script 2025-06-01 23:43:10.576729 | orchestrator | Registering Redlock._release_script 2025-06-01 23:43:10.582132 | orchestrator | 2025-06-01 23:43:10 | INFO  | D [0] - homer 2025-06-01 23:43:10.582175 | orchestrator | 2025-06-01 23:43:10 | INFO  | D [0] - 
netdata 2025-06-01 23:43:10.582188 | orchestrator | 2025-06-01 23:43:10 | INFO  | D [0] - openstackclient 2025-06-01 23:43:10.582292 | orchestrator | 2025-06-01 23:43:10 | INFO  | D [0] - phpmyadmin 2025-06-01 23:43:10.582309 | orchestrator | 2025-06-01 23:43:10 | INFO  | A [0] - common 2025-06-01 23:43:10.585270 | orchestrator | 2025-06-01 23:43:10 | INFO  | A [1] -- loadbalancer 2025-06-01 23:43:10.585292 | orchestrator | 2025-06-01 23:43:10 | INFO  | D [2] --- opensearch 2025-06-01 23:43:10.585350 | orchestrator | 2025-06-01 23:43:10 | INFO  | A [2] --- mariadb-ng 2025-06-01 23:43:10.585365 | orchestrator | 2025-06-01 23:43:10 | INFO  | D [3] ---- horizon 2025-06-01 23:43:10.587380 | orchestrator | 2025-06-01 23:43:10 | INFO  | A [3] ---- keystone 2025-06-01 23:43:10.587413 | orchestrator | 2025-06-01 23:43:10 | INFO  | A [4] ----- neutron 2025-06-01 23:43:10.587568 | orchestrator | 2025-06-01 23:43:10 | INFO  | D [5] ------ wait-for-nova 2025-06-01 23:43:10.587587 | orchestrator | 2025-06-01 23:43:10 | INFO  | A [5] ------ octavia 2025-06-01 23:43:10.587599 | orchestrator | 2025-06-01 23:43:10 | INFO  | D [4] ----- barbican 2025-06-01 23:43:10.587610 | orchestrator | 2025-06-01 23:43:10 | INFO  | D [4] ----- designate 2025-06-01 23:43:10.587621 | orchestrator | 2025-06-01 23:43:10 | INFO  | D [4] ----- ironic 2025-06-01 23:43:10.587633 | orchestrator | 2025-06-01 23:43:10 | INFO  | D [4] ----- placement 2025-06-01 23:43:10.587644 | orchestrator | 2025-06-01 23:43:10 | INFO  | D [4] ----- magnum 2025-06-01 23:43:10.587655 | orchestrator | 2025-06-01 23:43:10 | INFO  | A [1] -- openvswitch 2025-06-01 23:43:10.588208 | orchestrator | 2025-06-01 23:43:10 | INFO  | D [2] --- ovn 2025-06-01 23:43:10.588228 | orchestrator | 2025-06-01 23:43:10 | INFO  | D [1] -- memcached 2025-06-01 23:43:10.588366 | orchestrator | 2025-06-01 23:43:10 | INFO  | D [1] -- redis 2025-06-01 23:43:10.588383 | orchestrator | 2025-06-01 23:43:10 | INFO  | D [1] -- rabbitmq-ng 2025-06-01 23:43:10.589194 | orchestrator | 2025-06-01 23:43:10 | INFO  | A [0] - kubernetes 2025-06-01 23:43:10.591137 | orchestrator | 2025-06-01 23:43:10 | INFO  | D [1] -- kubeconfig 2025-06-01 23:43:10.591171 | orchestrator | 2025-06-01 23:43:10 | INFO  | A [1] -- copy-kubeconfig 2025-06-01 23:43:10.591269 | orchestrator | 2025-06-01 23:43:10 | INFO  | A [0] - ceph 2025-06-01 23:43:10.593663 | orchestrator | 2025-06-01 23:43:10 | INFO  | A [1] -- ceph-pools 2025-06-01 23:43:10.593705 | orchestrator | 2025-06-01 23:43:10 | INFO  | A [2] --- copy-ceph-keys 2025-06-01 23:43:10.593819 | orchestrator | 2025-06-01 23:43:10 | INFO  | A [3] ---- cephclient 2025-06-01 23:43:10.594495 | orchestrator | 2025-06-01 23:43:10 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-06-01 23:43:10.594521 | orchestrator | 2025-06-01 23:43:10 | INFO  | A [4] ----- wait-for-keystone 2025-06-01 23:43:10.594534 | orchestrator | 2025-06-01 23:43:10 | INFO  | D [5] ------ kolla-ceph-rgw 2025-06-01 23:43:10.594640 | orchestrator | 2025-06-01 23:43:10 | INFO  | D [5] ------ glance 2025-06-01 23:43:10.594687 | orchestrator | 2025-06-01 23:43:10 | INFO  | D [5] ------ cinder 2025-06-01 23:43:10.594701 | orchestrator | 2025-06-01 23:43:10 | INFO  | D [5] ------ nova 2025-06-01 23:43:10.594713 | orchestrator | 2025-06-01 23:43:10 | INFO  | A [4] ----- prometheus 2025-06-01 23:43:10.594726 | orchestrator | 2025-06-01 23:43:10 | INFO  | D [5] ------ grafana 2025-06-01 23:43:10.787317 | orchestrator | 2025-06-01 23:43:10 | INFO  | All tasks of the collection nutshell are 
prepared for execution 2025-06-01 23:43:10.787514 | orchestrator | 2025-06-01 23:43:10 | INFO  | Tasks are running in the background 2025-06-01 23:43:13.449742 | orchestrator | 2025-06-01 23:43:13 | INFO  | No task IDs specified, wait for all currently running tasks 2025-06-01 23:43:15.598508 | orchestrator | 2025-06-01 23:43:15 | INFO  | Task f53b42ed-1837-404b-bf7f-5c3065a90f20 is in state STARTED 2025-06-01 23:43:15.602241 | orchestrator | 2025-06-01 23:43:15 | INFO  | Task 9eda31de-c9e1-4de5-b511-73159b39c08d is in state STARTED 2025-06-01 23:43:15.605266 | orchestrator | 2025-06-01 23:43:15 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:43:15.606106 | orchestrator | 2025-06-01 23:43:15 | INFO  | Task 4061f35a-c88c-4c7f-ab33-a7858ad18527 is in state STARTED 2025-06-01 23:43:15.608033 | orchestrator | 2025-06-01 23:43:15 | INFO  | Task 3a0ad679-c505-4c16-8c70-b3419ae9e5b1 is in state STARTED 2025-06-01 23:43:15.612308 | orchestrator | 2025-06-01 23:43:15 | INFO  | Task 05896f86-f93d-473e-9acc-5781300c5f01 is in state STARTED 2025-06-01 23:43:15.613026 | orchestrator | 2025-06-01 23:43:15 | INFO  | Task 042f98b2-8e49-436d-8bc0-434e21b15384 is in state STARTED 2025-06-01 23:43:15.613129 | orchestrator | 2025-06-01 23:43:15 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:43:18.650873 | orchestrator | 2025-06-01 23:43:18 | INFO  | Task f53b42ed-1837-404b-bf7f-5c3065a90f20 is in state STARTED 2025-06-01 23:43:18.651060 | orchestrator | 2025-06-01 23:43:18 | INFO  | Task 9eda31de-c9e1-4de5-b511-73159b39c08d is in state STARTED 2025-06-01 23:43:18.651418 | orchestrator | 2025-06-01 23:43:18 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:43:18.652468 | orchestrator | 2025-06-01 23:43:18 | INFO  | Task 4061f35a-c88c-4c7f-ab33-a7858ad18527 is in state STARTED 2025-06-01 23:43:18.655496 | orchestrator | 2025-06-01 23:43:18 | INFO  | Task 3a0ad679-c505-4c16-8c70-b3419ae9e5b1 is in state STARTED 2025-06-01 23:43:18.658597 | orchestrator | 2025-06-01 23:43:18 | INFO  | Task 05896f86-f93d-473e-9acc-5781300c5f01 is in state STARTED 2025-06-01 23:43:18.659085 | orchestrator | 2025-06-01 23:43:18 | INFO  | Task 042f98b2-8e49-436d-8bc0-434e21b15384 is in state STARTED 2025-06-01 23:43:18.659108 | orchestrator | 2025-06-01 23:43:18 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:43:21.697564 | orchestrator | 2025-06-01 23:43:21 | INFO  | Task f53b42ed-1837-404b-bf7f-5c3065a90f20 is in state STARTED 2025-06-01 23:43:21.701407 | orchestrator | 2025-06-01 23:43:21 | INFO  | Task 9eda31de-c9e1-4de5-b511-73159b39c08d is in state STARTED 2025-06-01 23:43:21.701458 | orchestrator | 2025-06-01 23:43:21 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:43:21.701471 | orchestrator | 2025-06-01 23:43:21 | INFO  | Task 4061f35a-c88c-4c7f-ab33-a7858ad18527 is in state STARTED 2025-06-01 23:43:21.701483 | orchestrator | 2025-06-01 23:43:21 | INFO  | Task 3a0ad679-c505-4c16-8c70-b3419ae9e5b1 is in state STARTED 2025-06-01 23:43:21.704223 | orchestrator | 2025-06-01 23:43:21 | INFO  | Task 05896f86-f93d-473e-9acc-5781300c5f01 is in state STARTED 2025-06-01 23:43:21.704281 | orchestrator | 2025-06-01 23:43:21 | INFO  | Task 042f98b2-8e49-436d-8bc0-434e21b15384 is in state STARTED 2025-06-01 23:43:21.704294 | orchestrator | 2025-06-01 23:43:21 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:43:24.765670 | orchestrator | 2025-06-01 23:43:24 | INFO  | Task 
f53b42ed-1837-404b-bf7f-5c3065a90f20 is in state STARTED 2025-06-01 23:43:24.765881 | orchestrator | 2025-06-01 23:43:24 | INFO  | Task 9eda31de-c9e1-4de5-b511-73159b39c08d is in state STARTED 2025-06-01 23:43:24.771528 | orchestrator | 2025-06-01 23:43:24 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:43:24.772025 | orchestrator | 2025-06-01 23:43:24 | INFO  | Task 4061f35a-c88c-4c7f-ab33-a7858ad18527 is in state STARTED 2025-06-01 23:43:24.772583 | orchestrator | 2025-06-01 23:43:24 | INFO  | Task 3a0ad679-c505-4c16-8c70-b3419ae9e5b1 is in state STARTED 2025-06-01 23:43:24.773124 | orchestrator | 2025-06-01 23:43:24 | INFO  | Task 05896f86-f93d-473e-9acc-5781300c5f01 is in state STARTED 2025-06-01 23:43:24.773639 | orchestrator | 2025-06-01 23:43:24 | INFO  | Task 042f98b2-8e49-436d-8bc0-434e21b15384 is in state STARTED 2025-06-01 23:43:24.773661 | orchestrator | 2025-06-01 23:43:24 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:43:27.815663 | orchestrator | 2025-06-01 23:43:27 | INFO  | Task f53b42ed-1837-404b-bf7f-5c3065a90f20 is in state STARTED 2025-06-01 23:43:27.817309 | orchestrator | 2025-06-01 23:43:27 | INFO  | Task 9eda31de-c9e1-4de5-b511-73159b39c08d is in state STARTED 2025-06-01 23:43:27.817339 | orchestrator | 2025-06-01 23:43:27 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:43:27.817610 | orchestrator | 2025-06-01 23:43:27 | INFO  | Task 4061f35a-c88c-4c7f-ab33-a7858ad18527 is in state STARTED 2025-06-01 23:43:27.818197 | orchestrator | 2025-06-01 23:43:27 | INFO  | Task 3a0ad679-c505-4c16-8c70-b3419ae9e5b1 is in state STARTED 2025-06-01 23:43:27.821155 | orchestrator | 2025-06-01 23:43:27 | INFO  | Task 05896f86-f93d-473e-9acc-5781300c5f01 is in state STARTED 2025-06-01 23:43:27.822181 | orchestrator | 2025-06-01 23:43:27 | INFO  | Task 042f98b2-8e49-436d-8bc0-434e21b15384 is in state STARTED 2025-06-01 23:43:27.822206 | orchestrator | 2025-06-01 23:43:27 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:43:30.874441 | orchestrator | 2025-06-01 23:43:30 | INFO  | Task f53b42ed-1837-404b-bf7f-5c3065a90f20 is in state STARTED 2025-06-01 23:43:30.883853 | orchestrator | 2025-06-01 23:43:30 | INFO  | Task 9eda31de-c9e1-4de5-b511-73159b39c08d is in state STARTED 2025-06-01 23:43:30.888848 | orchestrator | 2025-06-01 23:43:30 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:43:30.893056 | orchestrator | 2025-06-01 23:43:30 | INFO  | Task 4061f35a-c88c-4c7f-ab33-a7858ad18527 is in state STARTED 2025-06-01 23:43:30.900624 | orchestrator | 2025-06-01 23:43:30 | INFO  | Task 3a0ad679-c505-4c16-8c70-b3419ae9e5b1 is in state STARTED 2025-06-01 23:43:30.900650 | orchestrator | 2025-06-01 23:43:30 | INFO  | Task 05896f86-f93d-473e-9acc-5781300c5f01 is in state STARTED 2025-06-01 23:43:30.902206 | orchestrator | 2025-06-01 23:43:30 | INFO  | Task 042f98b2-8e49-436d-8bc0-434e21b15384 is in state STARTED 2025-06-01 23:43:30.902229 | orchestrator | 2025-06-01 23:43:30 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:43:34.031016 | orchestrator | 2025-06-01 23:43:34 | INFO  | Task f53b42ed-1837-404b-bf7f-5c3065a90f20 is in state STARTED 2025-06-01 23:43:34.034109 | orchestrator | 2025-06-01 23:43:34 | INFO  | Task 9eda31de-c9e1-4de5-b511-73159b39c08d is in state STARTED 2025-06-01 23:43:34.035125 | orchestrator | 2025-06-01 23:43:34 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:43:34.036609 | 
orchestrator | 2025-06-01 23:43:34 | INFO  | Task 4061f35a-c88c-4c7f-ab33-a7858ad18527 is in state STARTED 2025-06-01 23:43:34.040946 | orchestrator | 2025-06-01 23:43:34 | INFO  | Task 3a0ad679-c505-4c16-8c70-b3419ae9e5b1 is in state STARTED 2025-06-01 23:43:34.042118 | orchestrator | 2025-06-01 23:43:34 | INFO  | Task 05896f86-f93d-473e-9acc-5781300c5f01 is in state STARTED 2025-06-01 23:43:34.042682 | orchestrator | 2025-06-01 23:43:34 | INFO  | Task 042f98b2-8e49-436d-8bc0-434e21b15384 is in state STARTED 2025-06-01 23:43:34.042763 | orchestrator | 2025-06-01 23:43:34 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:43:37.113653 | orchestrator | 2025-06-01 23:43:37 | INFO  | Task f53b42ed-1837-404b-bf7f-5c3065a90f20 is in state STARTED 2025-06-01 23:43:37.113790 | orchestrator | 2025-06-01 23:43:37 | INFO  | Task 9eda31de-c9e1-4de5-b511-73159b39c08d is in state STARTED 2025-06-01 23:43:37.113806 | orchestrator | 2025-06-01 23:43:37 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:43:37.113864 | orchestrator | 2025-06-01 23:43:37 | INFO  | Task 4061f35a-c88c-4c7f-ab33-a7858ad18527 is in state STARTED 2025-06-01 23:43:37.113979 | orchestrator | 2025-06-01 23:43:37 | INFO  | Task 3a0ad679-c505-4c16-8c70-b3419ae9e5b1 is in state STARTED 2025-06-01 23:43:37.114575 | orchestrator | 2025-06-01 23:43:37 | INFO  | Task 05896f86-f93d-473e-9acc-5781300c5f01 is in state STARTED 2025-06-01 23:43:37.118234 | orchestrator | 2025-06-01 23:43:37 | INFO  | Task 042f98b2-8e49-436d-8bc0-434e21b15384 is in state STARTED 2025-06-01 23:43:37.118259 | orchestrator | 2025-06-01 23:43:37 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:43:40.180805 | orchestrator | 2025-06-01 23:43:40 | INFO  | Task f53b42ed-1837-404b-bf7f-5c3065a90f20 is in state STARTED 2025-06-01 23:43:40.181117 | orchestrator | 2025-06-01 23:43:40 | INFO  | Task 9eda31de-c9e1-4de5-b511-73159b39c08d is in state STARTED 2025-06-01 23:43:40.186109 | orchestrator | 2025-06-01 23:43:40 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:43:40.187219 | orchestrator | 2025-06-01 23:43:40 | INFO  | Task 4061f35a-c88c-4c7f-ab33-a7858ad18527 is in state STARTED 2025-06-01 23:43:40.192467 | orchestrator | 2025-06-01 23:43:40 | INFO  | Task 3a0ad679-c505-4c16-8c70-b3419ae9e5b1 is in state STARTED 2025-06-01 23:43:40.193015 | orchestrator | 2025-06-01 23:43:40 | INFO  | Task 05896f86-f93d-473e-9acc-5781300c5f01 is in state STARTED 2025-06-01 23:43:40.194094 | orchestrator | 2025-06-01 23:43:40 | INFO  | Task 042f98b2-8e49-436d-8bc0-434e21b15384 is in state SUCCESS 2025-06-01 23:43:40.195260 | orchestrator | 2025-06-01 23:43:40.195347 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-06-01 23:43:40.195373 | orchestrator | 2025-06-01 23:43:40.195392 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
**** 2025-06-01 23:43:40.195412 | orchestrator | Sunday 01 June 2025 23:43:22 +0000 (0:00:00.542) 0:00:00.542 *********** 2025-06-01 23:43:40.195430 | orchestrator | changed: [testbed-manager] 2025-06-01 23:43:40.195451 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:43:40.195470 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:43:40.195489 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:43:40.195508 | orchestrator | changed: [testbed-node-4] 2025-06-01 23:43:40.195526 | orchestrator | changed: [testbed-node-3] 2025-06-01 23:43:40.195542 | orchestrator | changed: [testbed-node-5] 2025-06-01 23:43:40.195553 | orchestrator | 2025-06-01 23:43:40.195564 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-06-01 23:43:40.195613 | orchestrator | Sunday 01 June 2025 23:43:26 +0000 (0:00:04.110) 0:00:04.653 *********** 2025-06-01 23:43:40.195625 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-06-01 23:43:40.195637 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-06-01 23:43:40.195648 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-06-01 23:43:40.195658 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-06-01 23:43:40.195669 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-06-01 23:43:40.195680 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-06-01 23:43:40.195690 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-06-01 23:43:40.195701 | orchestrator | 2025-06-01 23:43:40.195712 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2025-06-01 23:43:40.195723 | orchestrator | Sunday 01 June 2025 23:43:28 +0000 (0:00:02.480) 0:00:07.133 *********** 2025-06-01 23:43:40.195740 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-01 23:43:27.265054', 'end': '2025-06-01 23:43:27.270961', 'delta': '0:00:00.005907', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-01 23:43:40.195755 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-01 23:43:27.167988', 'end': '2025-06-01 23:43:27.172940', 'delta': '0:00:00.004952', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-01 23:43:40.195775 | 
orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-01 23:43:27.607268', 'end': '2025-06-01 23:43:27.615894', 'delta': '0:00:00.008626', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-01 23:43:40.195818 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-01 23:43:27.751825', 'end': '2025-06-01 23:43:27.759732', 'delta': '0:00:00.007907', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-01 23:43:40.195840 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-01 23:43:27.957610', 'end': '2025-06-01 23:43:27.966129', 'delta': '0:00:00.008519', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-01 23:43:40.195851 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-01 23:43:28.246153', 'end': '2025-06-01 23:43:28.255430', 'delta': '0:00:00.009277', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-01 23:43:40.195863 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-01 23:43:28.415768', 'end': '2025-06-01 23:43:28.425210', 'delta': '0:00:00.009442', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-01 23:43:40.195874 | orchestrator |
2025-06-01 23:43:40.195885 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2025-06-01 23:43:40.195896 | orchestrator | Sunday 01 June 2025 23:43:31 +0000 (0:00:02.628)       0:00:09.761 ***********
2025-06-01 23:43:40.195907 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-06-01 23:43:40.195918 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-06-01 23:43:40.195929 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-06-01 23:43:40.195939 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-06-01 23:43:40.195950 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-06-01 23:43:40.195986 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-06-01 23:43:40.195997 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-06-01 23:43:40.196008 | orchestrator |
2025-06-01 23:43:40.196019 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2025-06-01 23:43:40.196030 | orchestrator | Sunday 01 June 2025 23:43:33 +0000 (0:00:02.570)       0:00:12.333 ***********
2025-06-01 23:43:40.196047 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-06-01 23:43:40.196058 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-06-01 23:43:40.196083 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-06-01 23:43:40.196094 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-06-01 23:43:40.196104 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-06-01 23:43:40.196115 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-06-01 23:43:40.196126 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-06-01 23:43:40.196137 | orchestrator |
2025-06-01 23:43:40.196148 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 23:43:40.196168 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 23:43:40.196181 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 23:43:40.196193 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 23:43:40.196203 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 23:43:40.196214 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 23:43:40.196225 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 23:43:40.196236 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 23:43:40.196246 | orchestrator |
2025-06-01 23:43:40.196258 | orchestrator |
2025-06-01 23:43:40.196269 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 23:43:40.196280 | orchestrator | Sunday 01 June 2025 23:43:38 +0000 (0:00:04.245)       0:00:16.579 ***********
2025-06-01 23:43:40.196291 | orchestrator | ===============================================================================
2025-06-01 23:43:40.196302 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 4.25s
2025-06-01 23:43:40.196312 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.11s
2025-06-01 23:43:40.196323 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.63s
2025-06-01 23:43:40.196334 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.57s
2025-06-01 23:43:40.196345 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.48s
2025-06-01 23:43:40.196390 | orchestrator | 2025-06-01 23:43:40 | INFO  | Wait 1 second(s) until the next check
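The PLAY RECAP and TASKS RECAP blocks above follow a fixed key=value layout, so a log consumer can check per-host health mechanically instead of eyeballing long recaps. A minimal sketch, not part of the job itself; the regex and the host/counter handling are assumptions based only on the format visible above:

    import re

    # Matches recap lines such as:
    #   testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
    RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counters>(?:\w+=\d+\s*)+)$")

    def parse_recap_line(line: str):
        """Return (host, counters) for a PLAY RECAP line, or None if it is not one."""
        m = RECAP_RE.match(line.strip())
        if m is None:
            return None
        counters = {k: int(v) for k, v in (p.split("=") for p in m.group("counters").split())}
        return m.group("host"), counters

    host, counters = parse_recap_line(
        "testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0"
    )
    assert counters["failed"] == 0 and counters["unreachable"] == 0  # host deployed cleanly

Checking failed and unreachable per host is how a wrapper can fail a build fast instead of scanning every recap by hand.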
2025-06-01 23:43:43.245754 | orchestrator | 2025-06-01 23:43:43 | INFO  | Task f53b42ed-1837-404b-bf7f-5c3065a90f20 is in state STARTED
2025-06-01 23:43:43.245895 | orchestrator | 2025-06-01 23:43:43 | INFO  | Task 9eda31de-c9e1-4de5-b511-73159b39c08d is in state STARTED
2025-06-01 23:43:43.247935 | orchestrator | 2025-06-01 23:43:43 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED
2025-06-01 23:43:43.250544 | orchestrator | 2025-06-01 23:43:43 | INFO  | Task 4061f35a-c88c-4c7f-ab33-a7858ad18527 is in state STARTED
2025-06-01 23:43:43.257175 | orchestrator | 2025-06-01 23:43:43 | INFO  | Task 3a0ad679-c505-4c16-8c70-b3419ae9e5b1 is in state STARTED
2025-06-01 23:43:43.257220 | orchestrator | 2025-06-01 23:43:43 | INFO  | Task 05896f86-f93d-473e-9acc-5781300c5f01 is in state STARTED
2025-06-01 23:43:43.259636 | orchestrator | 2025-06-01 23:43:43 | INFO  | Task 008dc52e-20b0-4df4-8b3c-7e6688121e1a is in state STARTED
2025-06-01 23:43:43.260321 | orchestrator | 2025-06-01 23:43:43 | INFO  | Wait 1 second(s) until the next check
[... the same seven tasks are polled every ~3 seconds until 2025-06-01 23:44:32; unchanged STARTED lines omitted, state transitions shown below ...]
2025-06-01 23:44:01.744705 | orchestrator | 2025-06-01 23:44:01 | INFO  | Task 05896f86-f93d-473e-9acc-5781300c5f01 is in state SUCCESS
2025-06-01 23:44:17.165466 | orchestrator | 2025-06-01 23:44:17 | INFO  | Task 3a0ad679-c505-4c16-8c70-b3419ae9e5b1 is in state SUCCESS
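These STARTED/SUCCESS lines come from a client that repeatedly asks for the state of each submitted task and sleeps between rounds until everything finishes. A minimal sketch of that polling pattern, assuming a hypothetical get_task_state(task_id) helper; the real OSISM client API is not shown in this log:

    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1.0):
        """Poll task states until every task has left the STARTED state."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)  # e.g. STARTED, SUCCESS, FAILURE
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)

Tasks that finish drop out of the rotation, which is why later polling rounds in this log list fewer and fewer task IDs.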
2025-06-01 23:44:35.494988 | orchestrator |
2025-06-01 23:44:35.495118 | orchestrator |
2025-06-01 23:44:35.495136 | orchestrator | PLAY [Apply role homer] ********************************************************
2025-06-01 23:44:35.495149 | orchestrator |
2025-06-01 23:44:35.495160 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-06-01 23:44:35.495173 | orchestrator | Sunday 01 June 2025 23:43:24 +0000 (0:00:00.864)       0:00:00.864 ***********
2025-06-01 23:44:35.495183 | orchestrator | ok: [testbed-manager] => {
2025-06-01 23:44:35.495196 | orchestrator |     "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-06-01 23:44:35.495209 | orchestrator | }
2025-06-01 23:44:35.495220 | orchestrator |
2025-06-01 23:44:35.495232 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-06-01 23:44:35.495242 | orchestrator | Sunday 01 June 2025 23:43:24 +0000 (0:00:00.459)       0:00:01.323 ***********
2025-06-01 23:44:35.495253 | orchestrator | ok: [testbed-manager]
2025-06-01 23:44:35.495264 | orchestrator |
2025-06-01 23:44:35.495275 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-06-01 23:44:35.495286 | orchestrator | Sunday 01 June 2025 23:43:26 +0000 (0:00:01.801)       0:00:03.125 ***********
2025-06-01 23:44:35.495297 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-06-01 23:44:35.495308 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-06-01 23:44:35.495320 | orchestrator |
2025-06-01 23:44:35.495330 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-06-01 23:44:35.495341 | orchestrator | Sunday 01 June 2025 23:43:28 +0000 (0:00:01.536)       0:00:04.663 ***********
2025-06-01 23:44:35.495351 | orchestrator | changed: [testbed-manager]
2025-06-01 23:44:35.495362 | orchestrator |
2025-06-01 23:44:35.495373 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-06-01 23:44:35.495383 | orchestrator | Sunday 01 June 2025 23:43:30 +0000 (0:00:02.095)       0:00:06.758 ***********
2025-06-01 23:44:35.495394 | orchestrator | changed: [testbed-manager]
2025-06-01 23:44:35.495405 | orchestrator |
2025-06-01 23:44:35.495415 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-06-01 23:44:35.495426 | orchestrator | Sunday 01 June 2025 23:43:33 +0000 (0:00:02.867)       0:00:09.625 ***********
2025-06-01 23:44:35.495436 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2025-06-01 23:44:35.495447 | orchestrator | ok: [testbed-manager]
2025-06-01 23:44:35.495458 | orchestrator |
2025-06-01 23:44:35.495468 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-06-01 23:44:35.495479 | orchestrator | Sunday 01 June 2025 23:43:57 +0000 (0:00:24.403)       0:00:34.029 ***********
2025-06-01 23:44:35.495490 | orchestrator | changed: [testbed-manager]
2025-06-01 23:44:35.495500 | orchestrator |
2025-06-01 23:44:35.495511 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 23:44:35.495524 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 23:44:35.495538 | orchestrator |
2025-06-01 23:44:35.495550 | orchestrator |
2025-06-01 23:44:35.495562 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 23:44:35.495574 | orchestrator | Sunday 01 June 2025 23:43:59 +0000 (0:00:01.933)       0:00:35.962 ***********
2025-06-01 23:44:35.495587 | orchestrator | ===============================================================================
2025-06-01 23:44:35.495599 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.40s
2025-06-01 23:44:35.495611 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.87s
2025-06-01 23:44:35.495623 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.10s
2025-06-01 23:44:35.495635 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 1.93s
2025-06-01 23:44:35.495648 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.80s
2025-06-01 23:44:35.495682 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.54s
2025-06-01 23:44:35.495695 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.46s
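"FAILED - RETRYING: Manage homer service (10 retries left)." is Ansible's standard retries/until behavior: the task is re-run on an interval until its condition holds or the attempts run out, which is why the task eventually reports ok after 24 seconds. The same pattern in plain Python, as a sketch; the check function is a stand-in, not the role's actual test:

    import time

    def retry_until(check, retries=10, delay=5.0):
        """Re-run `check` until it returns truthy, mimicking Ansible's retries/until loop."""
        for remaining in range(retries, 0, -1):
            if check():
                return True
            print(f"FAILED - RETRYING ({remaining} retries left).")
            time.sleep(delay)
        return check()  # final attempt after the last wait

    # `service_is_up` is hypothetical; the role's real health check is not visible in the log.
    def service_is_up() -> bool:
        return True

    assert retry_until(service_is_up)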
2025-06-01 23:44:35.495708 | orchestrator |
2025-06-01 23:44:35.495720 | orchestrator |
2025-06-01 23:44:35.495734 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-06-01 23:44:35.495745 | orchestrator |
2025-06-01 23:44:35.495756 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-06-01 23:44:35.495774 | orchestrator | Sunday 01 June 2025 23:43:25 +0000 (0:00:00.600)       0:00:00.600 ***********
2025-06-01 23:44:35.495786 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-06-01 23:44:35.495799 | orchestrator |
2025-06-01 23:44:35.495809 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-06-01 23:44:35.495820 | orchestrator | Sunday 01 June 2025 23:43:25 +0000 (0:00:00.499)       0:00:01.100 ***********
2025-06-01 23:44:35.495830 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-06-01 23:44:35.495841 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-06-01 23:44:35.495852 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-06-01 23:44:35.495863 | orchestrator |
2025-06-01 23:44:35.495873 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-06-01 23:44:35.495884 | orchestrator | Sunday 01 June 2025 23:43:27 +0000 (0:00:02.077)       0:00:03.177 ***********
2025-06-01 23:44:35.495895 | orchestrator | changed: [testbed-manager]
2025-06-01 23:44:35.495905 | orchestrator |
2025-06-01 23:44:35.495916 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-06-01 23:44:35.495927 | orchestrator | Sunday 01 June 2025 23:43:29 +0000 (0:00:01.757)       0:00:04.934 ***********
2025-06-01 23:44:35.495956 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-06-01 23:44:35.495968 | orchestrator | ok: [testbed-manager]
2025-06-01 23:44:35.495978 | orchestrator |
2025-06-01 23:44:35.496011 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-06-01 23:44:35.496030 | orchestrator | Sunday 01 June 2025 23:44:05 +0000 (0:00:35.937)       0:00:40.872 ***********
2025-06-01 23:44:35.496049 | orchestrator | changed: [testbed-manager]
2025-06-01 23:44:35.496069 | orchestrator |
2025-06-01 23:44:35.496088 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-06-01 23:44:35.496106 | orchestrator | Sunday 01 June 2025 23:44:07 +0000 (0:00:01.459)       0:00:42.332 ***********
2025-06-01 23:44:35.496117 | orchestrator | ok: [testbed-manager]
2025-06-01 23:44:35.496128 | orchestrator |
2025-06-01 23:44:35.496139 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-06-01 23:44:35.496150 | orchestrator | Sunday 01 June 2025 23:44:07 +0000 (0:00:00.858)       0:00:43.191 ***********
2025-06-01 23:44:35.496160 | orchestrator | changed: [testbed-manager]
2025-06-01 23:44:35.496171 | orchestrator |
2025-06-01 23:44:35.496181 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-06-01 23:44:35.496192 | orchestrator | Sunday 01 June 2025 23:44:10 +0000 (0:00:02.465)       0:00:45.657 ***********
2025-06-01 23:44:35.496203 | orchestrator | changed: [testbed-manager]
2025-06-01 23:44:35.496213 | orchestrator |
2025-06-01 23:44:35.496224 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-06-01 23:44:35.496235 | orchestrator | Sunday 01 June 2025 23:44:12 +0000 (0:00:02.407)       0:00:48.064 ***********
2025-06-01 23:44:35.496245 | orchestrator | changed: [testbed-manager]
2025-06-01 23:44:35.496256 | orchestrator |
2025-06-01 23:44:35.496267 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-06-01 23:44:35.496277 | orchestrator | Sunday 01 June 2025 23:44:14 +0000 (0:00:01.637)       0:00:49.702 ***********
2025-06-01 23:44:35.496288 | orchestrator | ok: [testbed-manager]
2025-06-01 23:44:35.496307 | orchestrator |
2025-06-01 23:44:35.496317 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 23:44:35.496328 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 23:44:35.496339 | orchestrator |
2025-06-01 23:44:35.496357 | orchestrator |
2025-06-01 23:44:35.496374 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 23:44:35.496391 | orchestrator | Sunday 01 June 2025 23:44:15 +0000 (0:00:00.703)       0:00:50.405 ***********
2025-06-01 23:44:35.496409 | orchestrator | ===============================================================================
2025-06-01 23:44:35.496429 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 35.94s
2025-06-01 23:44:35.496447 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.47s
2025-06-01 23:44:35.496463 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 2.41s
2025-06-01 23:44:35.496474 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.08s
2025-06-01 23:44:35.496484 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.76s
2025-06-01 23:44:35.496495 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.64s
2025-06-01 23:44:35.496505 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.46s
2025-06-01 23:44:35.496516 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.86s
2025-06-01 23:44:35.496527 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.70s
2025-06-01 23:44:35.496538 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.50s
2025-06-01 23:44:35.496548 | orchestrator |
2025-06-01 23:44:35.496559 | orchestrator |
2025-06-01 23:44:35.496570 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-01 23:44:35.496580 | orchestrator |
2025-06-01 23:44:35.496591 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-01 23:44:35.496602 | orchestrator | Sunday 01 June 2025 23:43:25 +0000 (0:00:00.551)       0:00:00.551 ***********
2025-06-01 23:44:35.496612 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-06-01 23:44:35.496623 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-06-01 23:44:35.496633 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-06-01 23:44:35.496650 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-06-01 23:44:35.496661 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-06-01 23:44:35.496672 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-06-01 23:44:35.496683 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
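The "Group hosts based on enabled services" task places every host into a dynamic group named after an enabled flag (here enable_netdata_True), so the following play can target exactly the hosts that have that service switched on. A dictionary-based sketch of the same idea, with hypothetical per-host flags:

    from collections import defaultdict

    # Hypothetical inventory flags; in this log every host has netdata enabled.
    hosts = {
        "testbed-manager": {"enable_netdata": True},
        "testbed-node-0": {"enable_netdata": True},
        "testbed-node-1": {"enable_netdata": False},
    }

    groups = defaultdict(list)
    for host, flags in hosts.items():
        for flag, value in flags.items():
            groups[f"{flag}_{value}"].append(host)  # e.g. "enable_netdata_True"

    print(groups["enable_netdata_True"])  # the hosts the netdata play would run on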
2025-06-01 23:44:35.496693 | orchestrator |
2025-06-01 23:44:35.496704 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-06-01 23:44:35.496715 | orchestrator |
2025-06-01 23:44:35.496725 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-06-01 23:44:35.496736 | orchestrator | Sunday 01 June 2025 23:43:28 +0000 (0:00:03.184)       0:00:03.736 ***********
2025-06-01 23:44:35.496760 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 23:44:35.496773 | orchestrator |
2025-06-01 23:44:35.496784 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-06-01 23:44:35.496795 | orchestrator | Sunday 01 June 2025 23:43:30 +0000 (0:00:02.355)       0:00:06.091 ***********
2025-06-01 23:44:35.496805 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:44:35.496816 | orchestrator | ok: [testbed-manager]
2025-06-01 23:44:35.496827 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:44:35.496838 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:44:35.496856 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:44:35.496874 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:44:35.496885 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:44:35.496896 | orchestrator |
2025-06-01 23:44:35.496907 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-06-01 23:44:35.496918 | orchestrator | Sunday 01 June 2025 23:43:33 +0000 (0:00:03.365)       0:00:09.456 ***********
2025-06-01 23:44:35.496928 | orchestrator | ok: [testbed-manager]
2025-06-01 23:44:35.496939 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:44:35.496950 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:44:35.496964 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:44:35.496982 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:44:35.497067 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:44:35.497086 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:44:35.497103 | orchestrator |
2025-06-01 23:44:35.497119 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-06-01 23:44:35.497135 | orchestrator | Sunday 01 June 2025 23:43:38 +0000 (0:00:04.154)       0:00:13.611 ***********
2025-06-01 23:44:35.497152 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:44:35.497171 | orchestrator | changed: [testbed-manager]
2025-06-01 23:44:35.497191 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:44:35.497209 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:44:35.497227 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:44:35.497239 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:44:35.497249 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:44:35.497259 | orchestrator |
2025-06-01 23:44:35.497270 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-06-01 23:44:35.497281 | orchestrator | Sunday 01 June 2025 23:43:41 +0000 (0:00:03.179)       0:00:16.791 ***********
2025-06-01 23:44:35.497292 | orchestrator | changed: [testbed-manager]
2025-06-01 23:44:35.497302 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:44:35.497313 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:44:35.497323 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:44:35.497334 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:44:35.497344 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:44:35.497355 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:44:35.497365 | orchestrator |
2025-06-01 23:44:35.497376 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-06-01 23:44:35.497387 | orchestrator | Sunday 01 June 2025 23:43:51 +0000 (0:00:09.733)       0:00:26.525 ***********
2025-06-01 23:44:35.497398 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:44:35.497408 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:44:35.497419 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:44:35.497429 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:44:35.497440 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:44:35.497451 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:44:35.497461 | orchestrator | changed: [testbed-manager]
2025-06-01 23:44:35.497472 | orchestrator |
2025-06-01 23:44:35.497483 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-06-01 23:44:35.497494 | orchestrator | Sunday 01 June 2025 23:44:07 +0000 (0:00:16.385)       0:00:42.910 ***********
2025-06-01 23:44:35.497506 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 23:44:35.497517 | orchestrator |
2025-06-01 23:44:35.497527 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-06-01 23:44:35.497537 | orchestrator | Sunday 01 June 2025 23:44:09 +0000 (0:00:02.040)       0:00:44.951 ***********
2025-06-01 23:44:35.497546 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-06-01 23:44:35.497556 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-06-01 23:44:35.497566 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-06-01 23:44:35.497575 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-06-01 23:44:35.497593 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-06-01 23:44:35.497603 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-06-01 23:44:35.497613 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-06-01 23:44:35.497622 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-06-01 23:44:35.497632 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-06-01 23:44:35.497641 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-06-01 23:44:35.497651 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-06-01 23:44:35.497660 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-06-01 23:44:35.497670 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-06-01 23:44:35.497679 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-06-01 23:44:35.497689 | orchestrator |
2025-06-01 23:44:35.497699 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-06-01 23:44:35.497709 | orchestrator | Sunday 01 June 2025 23:44:17 +0000 (0:00:08.160)       0:00:53.112 ***********
2025-06-01 23:44:35.497718 | orchestrator | ok: [testbed-manager]
2025-06-01 23:44:35.497728 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:44:35.497738 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:44:35.497747 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:44:35.497757 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:44:35.497766 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:44:35.497776 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:44:35.497785 | orchestrator |
2025-06-01 23:44:35.497795 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-06-01 23:44:35.497804 | orchestrator | Sunday 01 June 2025 23:44:18 +0000 (0:00:01.342)       0:00:54.454 ***********
2025-06-01 23:44:35.497814 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:44:35.497823 | orchestrator | changed: [testbed-manager]
2025-06-01 23:44:35.497833 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:44:35.497842 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:44:35.497852 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:44:35.497861 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:44:35.497871 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:44:35.497880 | orchestrator |
2025-06-01 23:44:35.497890 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-06-01 23:44:35.497907 | orchestrator | Sunday 01 June 2025 23:44:20 +0000 (0:00:02.036)       0:00:56.491 ***********
2025-06-01 23:44:35.497917 | orchestrator | ok: [testbed-manager]
2025-06-01 23:44:35.497927 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:44:35.497937 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:44:35.497983 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:44:35.498080 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:44:35.498094 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:44:35.498104 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:44:35.498114 | orchestrator |
2025-06-01 23:44:35.498124 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-06-01 23:44:35.498134 | orchestrator | Sunday 01 June 2025 23:44:22 +0000 (0:00:01.858)       0:00:58.349 ***********
2025-06-01 23:44:35.498144 | orchestrator | ok: [testbed-manager]
2025-06-01 23:44:35.498154 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:44:35.498163 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:44:35.498173 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:44:35.498182 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:44:35.498192 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:44:35.498201 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:44:35.498211 | orchestrator |
2025-06-01 23:44:35.498221 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-06-01 23:44:35.498231 | orchestrator | Sunday 01 June 2025 23:44:25 +0000 (0:00:02.196)       0:01:00.545 ***********
2025-06-01 23:44:35.498241 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-06-01 23:44:35.498252 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 23:44:35.498269 | orchestrator |
2025-06-01 23:44:35.498279 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-06-01 23:44:35.498289 | orchestrator | Sunday 01 June 2025 23:44:27 +0000 (0:00:02.488)       0:01:03.034 ***********
2025-06-01 23:44:35.498299 | orchestrator | changed: [testbed-manager]
2025-06-01 23:44:35.498308 | orchestrator |
2025-06-01 23:44:35.498318 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-06-01 23:44:35.498328 | orchestrator | Sunday 01 June 2025 23:44:29 +0000 (0:00:02.077)       0:01:05.112 ***********
2025-06-01 23:44:35.498338 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:44:35.498350 | orchestrator | changed: [testbed-manager]
2025-06-01 23:44:35.498366 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:44:35.498383 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:44:35.498399 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:44:35.498415 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:44:35.498432 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:44:35.498448 | orchestrator |
2025-06-01 23:44:35.498465 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 23:44:35.498475 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 23:44:35.498485 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 23:44:35.498495 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 23:44:35.498505 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 23:44:35.498515 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 23:44:35.498525 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 23:44:35.498534 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 23:44:35.498544 | orchestrator |
2025-06-01 23:44:35.498554 | orchestrator |
2025-06-01 23:44:35.498563 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 23:44:35.498573 | orchestrator | Sunday 01 June 2025 23:44:32 +0000 (0:00:02.735)       0:01:07.848 ***********
2025-06-01 23:44:35.498588 | orchestrator | ===============================================================================
2025-06-01 23:44:35.498598 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 16.39s
2025-06-01 23:44:35.498607 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.73s
2025-06-01 23:44:35.498617 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 8.16s
2025-06-01 23:44:35.498627 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.15s
2025-06-01 23:44:35.498636 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 3.37s
2025-06-01 23:44:35.498646 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.18s
2025-06-01 23:44:35.498655 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.18s
2025-06-01 23:44:35.498665 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 2.74s
2025-06-01 23:44:35.498674 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 2.49s
2025-06-01 23:44:35.498684 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.36s
2025-06-01 23:44:35.498700 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.20s
2025-06-01 23:44:35.498717 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.08s
2025-06-01 23:44:35.498727 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 2.04s
2025-06-01 23:44:35.498737 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.04s
2025-06-01 23:44:35.498746 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.86s
2025-06-01 23:44:35.498756 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.34s
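The two statistics tasks above work with a simple flag file: netdata treats the presence of /etc/netdata/.opt-out-from-anonymous-statistics as the opt-out signal, so the role first checks whether the file exists and then creates it. An idempotent sketch of that check-then-create step (needs root; the path is taken from the task names, everything else is an assumption):

    from pathlib import Path

    OPT_OUT = Path("/etc/netdata/.opt-out-from-anonymous-statistics")

    def opt_out_from_anonymous_statistics() -> bool:
        """Create the netdata opt-out flag file; return True if anything changed."""
        if OPT_OUT.exists():
            return False  # corresponds to Ansible reporting "ok"
        OPT_OUT.parent.mkdir(parents=True, exist_ok=True)
        OPT_OUT.touch()
        return True  # corresponds to Ansible reporting "changed"

Because the check runs before the write, re-running the role leaves the file alone, which is why a second deployment would report "ok" here instead of "changed".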
2025-06-01 23:44:35.498766 | orchestrator | 2025-06-01 23:44:35 | INFO  | Task f53b42ed-1837-404b-bf7f-5c3065a90f20 is in state SUCCESS
2025-06-01 23:44:35.498776 | orchestrator | 2025-06-01 23:44:35 | INFO  | Task 9eda31de-c9e1-4de5-b511-73159b39c08d is in state STARTED
2025-06-01 23:44:35.498786 | orchestrator | 2025-06-01 23:44:35 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED
2025-06-01 23:44:35.498796 | orchestrator | 2025-06-01 23:44:35 | INFO  | Task 4061f35a-c88c-4c7f-ab33-a7858ad18527 is in state STARTED
2025-06-01 23:44:35.498805 | orchestrator | 2025-06-01 23:44:35 | INFO  | Task 008dc52e-20b0-4df4-8b3c-7e6688121e1a is in state STARTED
2025-06-01 23:44:35.498815 | orchestrator | 2025-06-01 23:44:35 | INFO  | Wait 1 second(s) until the next check
[... unchanged STARTED polling for tasks 9eda31de, 5c1eb716 and 4061f35a repeated every ~3 seconds through 2025-06-01 23:45:42; only state transitions shown ...]
2025-06-01 23:45:06.142354 | orchestrator | 2025-06-01 23:45:06 | INFO  | Task 008dc52e-20b0-4df4-8b3c-7e6688121e1a is in state SUCCESS
2025-06-01 23:45:45.793394 | orchestrator |
2025-06-01 23:45:45.793523 | orchestrator |
2025-06-01 23:45:45.793549 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-06-01 23:45:45.793568 | orchestrator |
2025-06-01 23:45:45.793585 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-06-01 23:45:45.793602 | orchestrator | Sunday 01 June 2025 23:43:47 +0000 (0:00:00.231)       0:00:00.231 ***********
2025-06-01 23:45:45.793619 | orchestrator | ok: [testbed-manager]
2025-06-01 23:45:45.793637 | orchestrator |
2025-06-01 23:45:45.793653 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-06-01 23:45:45.794381 | orchestrator | PLAY [Apply role common] *******************************************************
2025-06-01 23:45:45.794417 | orchestrator | TASK [common : include_tasks] **************************************************
2025-06-01 23:45:45.794429 | orchestrator | Sunday 01 June 2025  23:43:15 +0000 (0:00:00.309)       0:00:00.309 ***********
2025-06-01 23:45:45.794439 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 23:45:45.794459 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-06-01 23:45:45.794469 | orchestrator | Sunday 01 June 2025  23:43:16 +0000 (0:00:01.183)       0:00:01.492 ***********
2025-06-01 23:45:45.794478 | orchestrator | changed: [testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5] => (items=[{'service_name': 'cron'}, 'cron'], [{'service_name': 'fluentd'}, 'fluentd'], [{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'] -- 21 identical per-host results)
2025-06-01 23:45:45.794792 | orchestrator | TASK [common : include_tasks] **************************************************
2025-06-01 23:45:45.794802 | orchestrator | Sunday 01 June 2025  23:43:21 +0000 (0:00:04.843)       0:00:06.335 ***********
2025-06-01 23:45:45.794812 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 23:45:45.794832 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-06-01 23:45:45.794842 | orchestrator | Sunday 01 June 2025  23:43:23 +0000 (0:00:01.686)       0:00:08.022 ***********
2025-06-01 23:45:45.794857 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 23:45:45.794915 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:45:45.795227 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:45:45.795373 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5] => (items fluentd, kolla-toolbox, cron -- 18 results with service definitions identical to testbed-manager's above)
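Every result line in these tasks carries an item of the form {'key': ..., 'value': ...}: the role iterates the service map with Ansible's dict2items filter, so each service definition becomes one loop item per host. The same transformation in plain Python, with the services dict abridged from the definitions logged above:

    services = {
        "fluentd": {"container_name": "fluentd", "group": "fluentd",
                    "enabled": True,
                    "image": "registry.osism.tech/kolla/fluentd:2024.2"},
        "kolla-toolbox": {"container_name": "kolla_toolbox",
                          "group": "kolla-toolbox", "enabled": True,
                          "image": "registry.osism.tech/kolla/kolla-toolbox:2024.2"},
        "cron": {"container_name": "cron", "group": "cron", "enabled": True,
                 "image": "registry.osism.tech/kolla/cron:2024.2"},
    }

    # dict2items turns a mapping into [{'key': k, 'value': v}, ...]; the task
    # loops over that list, which is why every result line shows an item with
    # 'key' and 'value' fields.
    items = [{"key": k, "value": v} for k, v in services.items()]
    for item in items:
        if item["value"]["enabled"]:
            print(f"changed: [testbed-manager] => (item={item})")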
2025-06-01 23:45:45.795394 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2025-06-01 23:45:45.795404 | orchestrator | Sunday 01 June 2025  23:43:29 +0000 (0:00:05.624)       0:00:13.646 ***********
2025-06-01 23:45:45.795746 | orchestrator | skipping: [testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5] => (items fluentd, kolla-toolbox, cron -- service definitions identical to those above)
2025-06-01 23:45:45.795759 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2025-06-01 23:45:45.795766 | orchestrator | Sunday 01 June 2025  23:43:30 +0000 (0:00:01.323)       0:00:14.970 ***********
2025-06-01 23:45:45.795878 | orchestrator | 2025-06-01 23:45:45 | INFO  | Task 9eda31de-c9e1-4de5-b511-73159b39c08d is in state SUCCESS
2025-06-01 23:45:45.796182 | orchestrator | skipping: [testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5] => (items fluentd, kolla-toolbox, cron -- service definitions identical to those above)
2025-06-01 23:45:45.796204 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-06-01 23:45:45.796215 | orchestrator | Sunday 01 June 2025  23:43:34 +0000 (0:00:03.619)       0:00:18.589 ***********
2025-06-01 23:45:45.796226 | orchestrator | skipping: [testbed-manager]
2025-06-01 23:45:45.796236 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:45:45.796246 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:45:45.796256 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:45:45.796267 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:45:45.796278 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:45:45.796289 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:45:45.796310 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-06-01 23:45:45.796321 | orchestrator | Sunday 01 June 2025  23:43:35 +0000 (0:00:01.602)       0:00:20.192 ***********
2025-06-01 23:45:45.796331 | orchestrator | skipping: [testbed-manager]
2025-06-01 23:45:45.796342 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:45:45.796352 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:45:45.796362 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:45:45.796373 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:45:45.796384 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:45:45.796394 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:45:45.796416 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-06-01 23:45:45.796434 | orchestrator | Sunday 01 June 2025  23:43:36 +0000 (0:00:01.338)       0:00:21.531 ***********
2025-06-01 23:45:45.796450 | orchestrator | changed: [testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5] => (items fluentd, kolla-toolbox, cron -- service definitions identical to those above)
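Each config.json copied here is read by the kolla container entrypoint (per KOLLA_CONFIG_STRATEGY=COPY_ALWAYS in the environments above, the files are copied into place on every container start) to decide which command to exec and which files to install. A sketch of the general shape of such a file, built with the stdlib json module; the command and config_files entries are illustrative, since the real ones are rendered per service from kolla-ansible templates:

    import json

    # Illustrative payload following the common kolla config.json pattern:
    # a command to exec plus a list of files to copy into place at startup.
    config = {
        "command": "cron -f",
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/logrotate.conf",
                "dest": "/etc/logrotate.conf",
                "owner": "root",
                "perm": "0644",
            }
        ],
    }

    with open("config.json", "w") as handle:
        json.dump(config, handle, indent=4)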
2025-06-01 23:45:45.796718 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2025-06-01 23:45:45.796730 | orchestrator | Sunday 01 June 2025  23:43:43 +0000 (0:00:06.150)       0:00:27.681 ***********
2025-06-01 23:45:45.796742 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a directory
2025-06-01 23:45:45.796799 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-01 23:45:45.796821 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2025-06-01 23:45:45.796833 | orchestrator | Sunday 01 June 2025  23:43:44 +0000 (0:00:01.544)       0:00:29.226 ***********
2025-06-01 23:45:45.796844 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a directory
2025-06-01 23:45:45.796902 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-01 23:45:45.796923 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2025-06-01 23:45:45.796942 | orchestrator | Sunday 01 June 2025  23:43:45 +0000 (0:00:01.231)       0:00:30.458 ***********
2025-06-01 23:45:45.796953 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a directory
2025-06-01 23:45:45.797008 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-01 23:45:45.797049 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2025-06-01 23:45:45.797060 | orchestrator | Sunday 01 June 2025  23:43:46 +0000 (0:00:00.844)       0:00:31.303 ***********
2025-06-01 23:45:45.797071 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a directory
2025-06-01 23:45:45.797129 | orchestrator | ok: [testbed-manager -> localhost]
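The four warnings are benign: this testbed configuration ships no custom fluentd overlay files, so the paths handed to Ansible's find module do not exist, and find reports a non-directory search path as an "access issue" instead of failing the task. The equivalent stdlib check, as a sketch:

    import os

    paths = [
        "/opt/configuration/environments/kolla/files/overlays/fluentd/input",
        "/opt/configuration/environments/kolla/files/overlays/fluentd/filter",
        "/opt/configuration/environments/kolla/files/overlays/fluentd/format",
        "/opt/configuration/environments/kolla/files/overlays/fluentd/output",
    ]

    for path in paths:
        if not os.path.isdir(path):
            # Matches the module's behaviour: warn and return no matches
            # rather than failing the play.
            print(f"[WARNING]: Skipped '{path}' path due to this access issue: "
                  f"'{path}' is not a directory")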
2025-06-01 23:45:45.797151 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2025-06-01 23:45:45.797162 | orchestrator | Sunday 01 June 2025 23:43:47 +0000 (0:00:01.076) 0:00:32.380 ***********
2025-06-01 23:45:45.797173 | orchestrator | changed: [testbed-manager]
2025-06-01 23:45:45.797183 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:45:45.797195 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:45:45.797206 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:45:45.797217 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:45:45.797228 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:45:45.797240 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:45:45.797250 | orchestrator |
2025-06-01 23:45:45.797262 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2025-06-01 23:45:45.797273 | orchestrator | Sunday 01 June 2025 23:43:51 +0000 (0:00:04.140) 0:00:36.521 ***********
2025-06-01 23:45:45.797284 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-01 23:45:45.797307 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-01 23:45:45.797319 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-01 23:45:45.797330 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-01 23:45:45.797341 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-01 23:45:45.797351 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-01 23:45:45.797362 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-01 23:45:45.797373 | orchestrator |
2025-06-01 23:45:45.797384 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2025-06-01 23:45:45.797395 | orchestrator | Sunday 01 June 2025 23:43:54 +0000 (0:00:02.800) 0:00:39.321 ***********
2025-06-01 23:45:45.797406 | orchestrator | changed: [testbed-manager]
2025-06-01 23:45:45.797417 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:45:45.797428 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:45:45.797439 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:45:45.797450 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:45:45.797461 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:45:45.797472 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:45:45.797482 | orchestrator |
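The Erlang cookie task above distributes the same secret to every node, because RabbitMQ only clusters nodes whose cookies match. A hedged sketch of generating and installing such a cookie; the file name and 0400 permissions follow RabbitMQ convention, while the generation method is illustrative rather than what kolla-ansible actually uses:

```python
# Illustrative only: create a shared Erlang cookie with restrictive
# permissions. kolla-ansible manages this via its own secrets handling.
import os
import secrets
from pathlib import Path

def write_erlang_cookie(path: Path, cookie: str | None = None) -> str:
    cookie = cookie or secrets.token_hex(20).upper()  # assumed format
    path.write_text(cookie)
    os.chmod(path, 0o400)  # the cookie must not be world-readable
    return cookie

if __name__ == "__main__":
    print(write_erlang_cookie(Path("/tmp/.erlang.cookie")))
```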
2025-06-01 23:45:45.797493 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2025-06-01 23:45:45.797512 | orchestrator | Sunday 01 June 2025 23:43:58 +0000 (0:00:03.375) 0:00:42.697 ***********
2025-06-01 23:45:45.797524 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 23:45:45.797536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:45:45.797548 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 23:45:45.797565 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:45:45.797577 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 23:45:45.797595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:45:45.797607 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:45:45.797628 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 23:45:45.797640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:45:45.797652 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:45:45.797663 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:45:45.797680 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 23:45:45.797691 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:45:45.797709 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:45:45.797721 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:45:45.797739 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 23:45:45.797751 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:45:45.797762 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 23:45:45.797773 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:45:45.797789 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:45:45.797800 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:45:45.797812 | orchestrator |
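Each loop item above is one entry from the common role's service map: a service key plus a value describing the container (name, group, image, volumes, dimensions). A minimal sketch of iterating such a map, with the two entries trimmed down from the log output and the filtering logic simplified to mirror the ok/skipping pattern:

```python
# Sketch of iterating a kolla-style service map, as the tasks above do.
# Entries are abbreviated from the log; the 'enabled' filter is a
# simplification of the real per-item conditions.
SERVICES = {
    "fluentd": {
        "container_name": "fluentd",
        "group": "fluentd",
        "enabled": True,
        "image": "registry.osism.tech/kolla/fluentd:2024.2",
        "volumes": ["/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro"],
    },
    "cron": {
        "container_name": "cron",
        "group": "cron",
        "enabled": True,
        "image": "registry.osism.tech/kolla/cron:2024.2",
        "volumes": ["/etc/kolla/cron/:/var/lib/kolla/config_files/:ro"],
    },
}

for key, service in SERVICES.items():
    if not service["enabled"]:
        continue  # such items show up as "skipping" in the play output
    print(f"ensure /etc/kolla/{key} exists for {service['image']}")
```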
2025-06-01 23:45:45.797823 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2025-06-01 23:45:45.797834 | orchestrator | Sunday 01 June 2025 23:44:00 +0000 (0:00:02.770) 0:00:45.467 ***********
2025-06-01 23:45:45.797846 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-01 23:45:45.797857 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-01 23:45:45.797864 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-01 23:45:45.797871 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-01 23:45:45.797883 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-01 23:45:45.797890 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-01 23:45:45.797896 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-01 23:45:45.797903 | orchestrator |
2025-06-01 23:45:45.797910 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2025-06-01 23:45:45.797916 | orchestrator | Sunday 01 June 2025 23:44:04 +0000 (0:00:03.237) 0:00:48.705 ***********
2025-06-01 23:45:45.797923 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-01 23:45:45.797930 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-01 23:45:45.797936 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-01 23:45:45.797943 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-01 23:45:45.797950 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-01 23:45:45.797956 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-01 23:45:45.797963 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-01 23:45:45.797970 | orchestrator |
2025-06-01 23:45:45.797976 | orchestrator | TASK [common : Check common containers] ****************************************
2025-06-01 23:45:45.797983 | orchestrator | Sunday 01 June 2025 23:44:08 +0000 (0:00:04.478) 0:00:53.184 ***********
2025-06-01 23:45:45.797990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 23:45:45.797998 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 23:45:45.798005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 23:45:45.798079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:45:45.798104 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:45:45.798129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:45:45.798141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 23:45:45.798151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:45:45.798159 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:45:45.798166 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 23:45:45.798173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:45:45.798189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:45:45.798218 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 23:45:45.798230 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:45:45.798242 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-01 23:45:45.798254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:45:45.798266 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:45:45.798278 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:45:45.798294 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:45:45.798313 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:45:45.798337 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:45:45.798348 | orchestrator |
2025-06-01 23:45:45.798359 | orchestrator | TASK [common : Creating log volume] ********************************************
2025-06-01 23:45:45.798369 | orchestrator | Sunday 01 June 2025 23:44:13 +0000 (0:00:04.763) 0:00:57.947 ***********
2025-06-01 23:45:45.798380 | orchestrator | changed: [testbed-manager]
2025-06-01 23:45:45.798390 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:45:45.798401 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:45:45.798413 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:45:45.798424 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:45:45.798434 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:45:45.798445 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:45:45.798455 | orchestrator |
2025-06-01 23:45:45.798465 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2025-06-01 23:45:45.798477 | orchestrator | Sunday 01 June 2025 23:44:15 +0000 (0:00:02.351) 0:01:00.298 ***********
2025-06-01 23:45:45.798487 | orchestrator | changed: [testbed-manager]
2025-06-01 23:45:45.798499 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:45:45.798510 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:45:45.798521 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:45:45.798532 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:45:45.798543 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:45:45.798554 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:45:45.798565 | orchestrator |
2025-06-01 23:45:45.798575 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-06-01 23:45:45.798587 | orchestrator | Sunday 01 June 2025 23:44:17 +0000 (0:00:01.762) 0:01:02.061 ***********
2025-06-01 23:45:45.798597 | orchestrator |
2025-06-01 23:45:45.798609 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-06-01 23:45:45.798620 | orchestrator | Sunday 01 June 2025 23:44:17 +0000 (0:00:00.071) 0:01:02.132 ***********
2025-06-01 23:45:45.798630 | orchestrator |
2025-06-01 23:45:45.798641 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-06-01 23:45:45.798652 | orchestrator | Sunday 01 June 2025 23:44:17 +0000 (0:00:00.073) 0:01:02.206 ***********
2025-06-01 23:45:45.798663 | orchestrator |
2025-06-01 23:45:45.798674 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-06-01 23:45:45.798685 | orchestrator | Sunday 01 June 2025 23:44:17 +0000 (0:00:00.090) 0:01:02.296 ***********
2025-06-01 23:45:45.798696 | orchestrator |
2025-06-01 23:45:45.798707 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-06-01 23:45:45.798718 | orchestrator | Sunday 01 June 2025 23:44:17 +0000 (0:00:00.066) 0:01:02.363 ***********
2025-06-01 23:45:45.798729 | orchestrator |
2025-06-01 23:45:45.798739 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-06-01 23:45:45.798751 | orchestrator | Sunday 01 June 2025 23:44:18 +0000 (0:00:00.190) 0:01:02.553 ***********
2025-06-01 23:45:45.798762 | orchestrator |
2025-06-01 23:45:45.798773 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-06-01 23:45:45.798784 | orchestrator | Sunday 01 June 2025 23:44:18 +0000 (0:00:00.070) 0:01:02.623 ***********
2025-06-01 23:45:45.798804 | orchestrator |
2025-06-01 23:45:45.798865 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2025-06-01 23:45:45.798877 | orchestrator | Sunday 01 June 2025 23:44:18 +0000 (0:00:00.081) 0:01:02.705 ***********
2025-06-01 23:45:45.798884 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:45:45.798892 | orchestrator | changed: [testbed-manager]
2025-06-01 23:45:45.798903 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:45:45.798913 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:45:45.798924 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:45:45.798935 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:45:45.798946 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:45:45.798957 | orchestrator |
2025-06-01 23:45:45.798968 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2025-06-01 23:45:45.798979 | orchestrator | Sunday 01 June 2025 23:44:58 +0000 (0:00:40.502) 0:01:43.207 ***********
2025-06-01 23:45:45.798989 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:45:45.799000 | orchestrator | changed: [testbed-manager]
2025-06-01 23:45:45.799056 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:45:45.799071 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:45:45.799082 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:45:45.799093 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:45:45.799104 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:45:45.799115 | orchestrator |
2025-06-01 23:45:45.799125 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2025-06-01 23:45:45.799137 | orchestrator | Sunday 01 June 2025 23:45:37 +0000 (0:00:39.320) 0:02:22.528 ***********
2025-06-01 23:45:45.799154 | orchestrator | ok: [testbed-manager]
2025-06-01 23:45:45.799166 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:45:45.799178 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:45:45.799188 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:45:45.799199 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:45:45.799210 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:45:45.799221 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:45:45.799231 | orchestrator |
2025-06-01 23:45:45.799243 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2025-06-01 23:45:45.799254 | orchestrator | Sunday 01 June 2025 23:45:39 +0000 (0:00:01.973) 0:02:24.501 ***********
2025-06-01 23:45:45.799265 | orchestrator | changed: [testbed-manager]
2025-06-01 23:45:45.799276 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:45:45.799287 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:45:45.799298 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:45:45.799309 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:45:45.799320 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:45:45.799331 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:45:45.799342 | orchestrator |
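The RUNNING HANDLER entries above fire only because the earlier "Check common containers" task reported changes; each changed container is then restarted once, after the handlers are flushed. A rough equivalent of that restart-on-change pattern with the Docker SDK (illustrative only; kolla-ansible drives this through its own container modules, and the container names below are taken from the handler output):

```python
# Illustrative restart-on-change pattern using the Docker SDK
# (pip install docker). Not kolla-ansible's actual implementation.
import docker

def restart_if_changed(name: str, changed: bool) -> None:
    """Restart a container only when its check step reported a change."""
    if not changed:
        return
    client = docker.from_env()
    client.containers.get(name).restart()  # "Restart <name> container"

for name in ("fluentd", "kolla_toolbox", "cron"):
    restart_if_changed(name, changed=True)
```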
2025-06-01 23:45:45.799353 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 23:45:45.799364 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-01 23:45:45.799384 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-01 23:45:45.799396 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-01 23:45:45.799406 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-01 23:45:45.799417 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-01 23:45:45.799428 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-01 23:45:45.799448 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-01 23:45:45.799460 | orchestrator |
2025-06-01 23:45:45.799471 | orchestrator |
2025-06-01 23:45:45.799482 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 23:45:45.799493 | orchestrator | Sunday 01 June 2025 23:45:44 +0000 (0:00:04.478) 0:02:28.980 ***********
2025-06-01 23:45:45.799504 | orchestrator | ===============================================================================
2025-06-01 23:45:45.799515 | orchestrator | common : Restart fluentd container ------------------------------------- 40.50s
2025-06-01 23:45:45.799525 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 39.32s
2025-06-01 23:45:45.799536 | orchestrator | common : Copying over config.json files for services -------------------- 6.15s
2025-06-01 23:45:45.799583 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.62s
2025-06-01 23:45:45.799595 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.84s
2025-06-01 23:45:45.799606 | orchestrator | common : Check common containers ---------------------------------------- 4.76s
2025-06-01 23:45:45.799618 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 4.48s
2025-06-01 23:45:45.799628 | orchestrator | common : Restart cron container ----------------------------------------- 4.48s
2025-06-01 23:45:45.799639 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.14s
2025-06-01 23:45:45.799652 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.62s
2025-06-01 23:45:45.799662 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.38s
2025-06-01 23:45:45.799673 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.24s
2025-06-01 23:45:45.799683 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.80s
2025-06-01 23:45:45.799694 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.77s
2025-06-01 23:45:45.799705 | orchestrator | common : Creating log volume -------------------------------------------- 2.35s
2025-06-01 23:45:45.799716 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.97s
2025-06-01 23:45:45.799723 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.76s
2025-06-01 23:45:45.799729 | orchestrator | common : include_tasks -------------------------------------------------- 1.69s
2025-06-01 23:45:45.799736 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.60s
2025-06-01 23:45:45.799743 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.54s
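With the kolla-ansible play finished, the OSISM manager keeps polling its task queue; the repeated INFO lines below report each task's state (Celery-style STARTED/SUCCESS) once per second until every task leaves STARTED. The waiting pattern is essentially the following sketch, where fetch_state is a hypothetical stand-in for the real task-state lookup and the task ID comes from the log:

```python
# Minimal sketch of the poll loop behind the INFO lines below.
# fetch_state() is a hypothetical stand-in for the real lookup.
import time

def wait_for_tasks(task_ids, fetch_state):
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):  # sorted() copies, safe to mutate
            state = fetch_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print("Wait 1 second(s) until the next check")
            time.sleep(1)

if __name__ == "__main__":
    states = iter(["STARTED", "STARTED", "SUCCESS"])  # demo states
    wait_for_tasks(["edc31372-5266-4c4e-8fd0-bbd69fa017d4"],
                   lambda _id: next(states))
```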
2025-06-01 23:45:45.799882 | orchestrator | 2025-06-01 23:45:45 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED
2025-06-01 23:45:45.799894 | orchestrator | 2025-06-01 23:45:45 | INFO  | Task 4061f35a-c88c-4c7f-ab33-a7858ad18527 is in state STARTED
2025-06-01 23:45:45.799901 | orchestrator | 2025-06-01 23:45:45 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:45:48.854481 | orchestrator | 2025-06-01 23:45:48 | INFO  | Task edc31372-5266-4c4e-8fd0-bbd69fa017d4 is in state STARTED
2025-06-01 23:45:48.854986 | orchestrator | 2025-06-01 23:45:48 | INFO  | Task 865e5c00-0883-4473-bb8e-6d549fe83443 is in state STARTED
2025-06-01 23:45:48.857713 | orchestrator | 2025-06-01 23:45:48 | INFO  | Task 64f3934a-e816-4f92-891b-d6679f5d4f52 is in state STARTED
2025-06-01 23:45:48.862127 | orchestrator | 2025-06-01 23:45:48 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED
2025-06-01 23:45:48.862658 | orchestrator | 2025-06-01 23:45:48 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED
2025-06-01 23:45:48.863418 | orchestrator | 2025-06-01 23:45:48 | INFO  | Task 4061f35a-c88c-4c7f-ab33-a7858ad18527 is in state STARTED
2025-06-01 23:45:48.864385 | orchestrator | 2025-06-01 23:45:48 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:45:51.902478 | orchestrator | 2025-06-01 23:45:51 | INFO  | Task edc31372-5266-4c4e-8fd0-bbd69fa017d4 is in state STARTED
2025-06-01 23:45:51.902700 | orchestrator | 2025-06-01 23:45:51 | INFO  | Task 865e5c00-0883-4473-bb8e-6d549fe83443 is in state STARTED
2025-06-01 23:45:51.903353 | orchestrator | 2025-06-01 23:45:51 | INFO  | Task 64f3934a-e816-4f92-891b-d6679f5d4f52 is in state STARTED
2025-06-01 23:45:51.904151 | orchestrator | 2025-06-01 23:45:51 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED
2025-06-01 23:45:51.906842 | orchestrator | 2025-06-01 23:45:51 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED
2025-06-01 23:45:51.916758 | orchestrator | 2025-06-01 23:45:51 | INFO  | Task 4061f35a-c88c-4c7f-ab33-a7858ad18527 is in state STARTED
2025-06-01 23:45:51.916812 | orchestrator | 2025-06-01 23:45:51 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:45:54.940422 | orchestrator | 2025-06-01 23:45:54 | INFO  | Task edc31372-5266-4c4e-8fd0-bbd69fa017d4 is in state STARTED
2025-06-01 23:45:54.940593 | orchestrator | 2025-06-01 23:45:54 | INFO  | Task 865e5c00-0883-4473-bb8e-6d549fe83443 is in state STARTED
2025-06-01 23:45:54.944157 | orchestrator | 2025-06-01 23:45:54 | INFO  | Task 64f3934a-e816-4f92-891b-d6679f5d4f52 is in state STARTED
2025-06-01 23:45:54.944527 | orchestrator | 2025-06-01 23:45:54 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED
2025-06-01 23:45:54.947784 | orchestrator | 2025-06-01 23:45:54 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED
2025-06-01 23:45:54.951442 | orchestrator | 2025-06-01 23:45:54 | INFO  | Task 4061f35a-c88c-4c7f-ab33-a7858ad18527 is in state STARTED
2025-06-01 23:45:54.951475 | orchestrator | 2025-06-01 23:45:54 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:45:57.992127 | orchestrator | 2025-06-01 23:45:57 | INFO  | Task edc31372-5266-4c4e-8fd0-bbd69fa017d4 is in state STARTED
2025-06-01 23:45:57.992235 | orchestrator | 2025-06-01 23:45:57 | INFO  | Task 865e5c00-0883-4473-bb8e-6d549fe83443 is in state STARTED
2025-06-01 23:45:57.992251 | orchestrator | 2025-06-01 23:45:57 | INFO  | Task 64f3934a-e816-4f92-891b-d6679f5d4f52 is in state STARTED
2025-06-01 23:45:57.992973 | orchestrator | 2025-06-01 23:45:57 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED
2025-06-01 23:45:57.993588 | orchestrator | 2025-06-01 23:45:57 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED
2025-06-01 23:45:57.999697 | orchestrator | 2025-06-01 23:45:57 | INFO  | Task 4061f35a-c88c-4c7f-ab33-a7858ad18527 is in state STARTED
2025-06-01 23:45:57.999791 | orchestrator | 2025-06-01 23:45:57 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:46:01.039843 | orchestrator | 2025-06-01 23:46:01 | INFO  | Task edc31372-5266-4c4e-8fd0-bbd69fa017d4 is in state STARTED
2025-06-01 23:46:01.042950 | orchestrator | 2025-06-01 23:46:01 | INFO  | Task 865e5c00-0883-4473-bb8e-6d549fe83443 is in state STARTED
2025-06-01 23:46:01.043465 | orchestrator | 2025-06-01 23:46:01 | INFO  | Task 64f3934a-e816-4f92-891b-d6679f5d4f52 is in state STARTED
2025-06-01 23:46:01.046548 | orchestrator | 2025-06-01 23:46:01 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED
2025-06-01 23:46:01.046583 | orchestrator | 2025-06-01 23:46:01 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED
2025-06-01 23:46:01.046595 | orchestrator | 2025-06-01 23:46:01 | INFO  | Task 4061f35a-c88c-4c7f-ab33-a7858ad18527 is in state STARTED
2025-06-01 23:46:01.046607 | orchestrator | 2025-06-01 23:46:01 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:46:04.090972 | orchestrator | 2025-06-01 23:46:04 | INFO  | Task edc31372-5266-4c4e-8fd0-bbd69fa017d4 is in state STARTED
2025-06-01 23:46:04.097558 | orchestrator | 2025-06-01 23:46:04 | INFO  | Task 865e5c00-0883-4473-bb8e-6d549fe83443 is in state STARTED
2025-06-01 23:46:04.101157 | orchestrator | 2025-06-01 23:46:04 | INFO  | Task 64f3934a-e816-4f92-891b-d6679f5d4f52 is in state STARTED
2025-06-01 23:46:04.104325 | orchestrator | 2025-06-01 23:46:04 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED
2025-06-01 23:46:04.107244 | orchestrator | 2025-06-01 23:46:04 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED
2025-06-01 23:46:04.113452 | orchestrator | 2025-06-01 23:46:04 | INFO  | Task 4061f35a-c88c-4c7f-ab33-a7858ad18527 is in state STARTED
2025-06-01 23:46:04.113508 | orchestrator | 2025-06-01 23:46:04 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:46:07.146700 | orchestrator | 2025-06-01 23:46:07 | INFO  | Task edc31372-5266-4c4e-8fd0-bbd69fa017d4 is in state STARTED
2025-06-01 23:46:07.149342 | orchestrator | 2025-06-01 23:46:07 | INFO  | Task cf7790bc-b9f2-462a-b640-62fc0b8882d4 is in state STARTED
2025-06-01 23:46:07.149874 | orchestrator | 2025-06-01 23:46:07 | INFO  | Task 865e5c00-0883-4473-bb8e-6d549fe83443 is in state SUCCESS
2025-06-01 23:46:07.150751 | orchestrator | 2025-06-01 23:46:07 | INFO  | Task 64f3934a-e816-4f92-891b-d6679f5d4f52 is in state STARTED
2025-06-01 23:46:07.152147 | orchestrator | 2025-06-01 23:46:07 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED
2025-06-01 23:46:07.153165 | orchestrator | 2025-06-01 23:46:07 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED
2025-06-01 23:46:07.154468 | orchestrator | 2025-06-01 23:46:07 | INFO  | Task 4061f35a-c88c-4c7f-ab33-a7858ad18527 is in state STARTED
2025-06-01 23:46:07.154505 | orchestrator | 2025-06-01 23:46:07 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:46:10.194254 | orchestrator | 2025-06-01 23:46:10 | INFO  | Task edc31372-5266-4c4e-8fd0-bbd69fa017d4 is in state STARTED
2025-06-01 23:46:10.195197 | orchestrator | 2025-06-01 23:46:10 | INFO  | Task cf7790bc-b9f2-462a-b640-62fc0b8882d4 is in state STARTED
2025-06-01 23:46:10.197206 | orchestrator | 2025-06-01 23:46:10 | INFO  | Task 64f3934a-e816-4f92-891b-d6679f5d4f52 is in state STARTED
2025-06-01 23:46:10.203293 | orchestrator | 2025-06-01 23:46:10 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED
2025-06-01 23:46:10.204792 | orchestrator | 2025-06-01 23:46:10 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED
2025-06-01 23:46:10.207088 | orchestrator | 2025-06-01 23:46:10 | INFO  | Task 4061f35a-c88c-4c7f-ab33-a7858ad18527 is in state STARTED
2025-06-01 23:46:10.207201 | orchestrator | 2025-06-01 23:46:10 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:46:13.254793 | orchestrator | 2025-06-01 23:46:13 | INFO  | Task edc31372-5266-4c4e-8fd0-bbd69fa017d4 is in state STARTED
2025-06-01 23:46:13.256865 | orchestrator | 2025-06-01 23:46:13 | INFO  | Task cf7790bc-b9f2-462a-b640-62fc0b8882d4 is in state STARTED
2025-06-01 23:46:13.259416 | orchestrator | 2025-06-01 23:46:13 | INFO  | Task 64f3934a-e816-4f92-891b-d6679f5d4f52 is in state STARTED
2025-06-01 23:46:13.260109 | orchestrator | 2025-06-01 23:46:13 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED
2025-06-01 23:46:13.262005 | orchestrator | 2025-06-01 23:46:13 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED
2025-06-01 23:46:13.269736 | orchestrator | 2025-06-01 23:46:13 | INFO  | Task 4061f35a-c88c-4c7f-ab33-a7858ad18527 is in state STARTED
2025-06-01 23:46:13.269822 | orchestrator | 2025-06-01 23:46:13 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:46:16.326513 | orchestrator | 2025-06-01 23:46:16 | INFO  | Task edc31372-5266-4c4e-8fd0-bbd69fa017d4 is in state STARTED
2025-06-01 23:46:16.326790 | orchestrator | 2025-06-01 23:46:16 | INFO  | Task cf7790bc-b9f2-462a-b640-62fc0b8882d4 is in state STARTED
2025-06-01 23:46:16.327199 | orchestrator | 2025-06-01 23:46:16 | INFO  | Task 64f3934a-e816-4f92-891b-d6679f5d4f52 is in state STARTED
2025-06-01 23:46:16.329587 | orchestrator | 2025-06-01 23:46:16 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED
2025-06-01 23:46:16.352447 | orchestrator | 2025-06-01 23:46:16 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED
2025-06-01 23:46:16.352544 | orchestrator | 2025-06-01 23:46:16 | INFO  | Task 4061f35a-c88c-4c7f-ab33-a7858ad18527 is in state STARTED
2025-06-01 23:46:16.352578 | orchestrator | 2025-06-01 23:46:16 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:46:19.382298 | orchestrator | 2025-06-01 23:46:19 | INFO  | Task edc31372-5266-4c4e-8fd0-bbd69fa017d4 is in state STARTED
2025-06-01 23:46:19.384378 | orchestrator | 2025-06-01 23:46:19 | INFO  | Task cf7790bc-b9f2-462a-b640-62fc0b8882d4 is in state STARTED
2025-06-01 23:46:19.386589 | orchestrator | 2025-06-01 23:46:19 | INFO  | Task 64f3934a-e816-4f92-891b-d6679f5d4f52 is in state STARTED
2025-06-01 23:46:19.388765 | orchestrator | 2025-06-01 23:46:19 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED
2025-06-01 23:46:19.391899 | orchestrator | 2025-06-01 23:46:19 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED
2025-06-01 23:46:19.393256 | orchestrator | 2025-06-01 23:46:19 | INFO  | Task 4061f35a-c88c-4c7f-ab33-a7858ad18527 is in state STARTED
2025-06-01 23:46:19.393372 | orchestrator | 2025-06-01 23:46:19 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:46:22.419870 | orchestrator | 2025-06-01 23:46:22 | INFO  | Task edc31372-5266-4c4e-8fd0-bbd69fa017d4 is in state STARTED
2025-06-01 23:46:22.420139 | orchestrator | 2025-06-01 23:46:22 | INFO  | Task cf7790bc-b9f2-462a-b640-62fc0b8882d4 is in state STARTED
2025-06-01 23:46:22.420741 | orchestrator | 2025-06-01 23:46:22 | INFO  | Task 64f3934a-e816-4f92-891b-d6679f5d4f52 is in state STARTED
2025-06-01 23:46:22.421404 | orchestrator | 2025-06-01 23:46:22 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED
2025-06-01 23:46:22.421992 | orchestrator | 2025-06-01 23:46:22 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED
2025-06-01 23:46:22.422617 | orchestrator | 2025-06-01 23:46:22 | INFO  | Task 4061f35a-c88c-4c7f-ab33-a7858ad18527 is in state STARTED
2025-06-01 23:46:22.422724 | orchestrator | 2025-06-01 23:46:22 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:46:25.461901 | orchestrator | 2025-06-01 23:46:25 | INFO  | Task edc31372-5266-4c4e-8fd0-bbd69fa017d4 is in state SUCCESS
2025-06-01 23:46:25.462968 | orchestrator |
2025-06-01 23:46:25.463065 | orchestrator |
2025-06-01 23:46:25.463080 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-01 23:46:25.463088 | orchestrator |
2025-06-01 23:46:25.463096 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-01 23:46:25.463105 | orchestrator | Sunday 01 June 2025 23:45:53 +0000 (0:00:00.657) 0:00:00.657 ***********
2025-06-01 23:46:25.463112 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:46:25.463122 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:46:25.463128 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:46:25.463136 | orchestrator |
2025-06-01 23:46:25.463142 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-01 23:46:25.463178 | orchestrator | Sunday 01 June 2025 23:45:54 +0000 (0:00:00.418) 0:00:01.076 ***********
2025-06-01 23:46:25.463188 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-06-01 23:46:25.463195 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-06-01 23:46:25.463201 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-06-01 23:46:25.463207 | orchestrator |
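The grouping tasks above place hosts into dynamic groups named after their enabled services (for example enable_memcached_True), and the service plays that follow target those groups. In plain Python the same bucketing looks like the sketch below; the per-host flag values are assumed from the items shown in the log:

```python
# Sketch of the "Group hosts based on enabled services" pattern:
# every host joins a group named "<flag>_<value>". Flag values are
# assumed from the loop items above, not read from real inventory.
from collections import defaultdict

host_flags = {
    "testbed-node-0": {"enable_memcached": True, "enable_redis": True},
    "testbed-node-1": {"enable_memcached": True, "enable_redis": True},
    "testbed-node-2": {"enable_memcached": True, "enable_redis": True},
}

groups: dict[str, list[str]] = defaultdict(list)
for host, flags in host_flags.items():
    for flag, value in flags.items():
        groups[f"{flag}_{value}"].append(host)

print(groups["enable_memcached_True"])  # hosts the memcached play targets
```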
2025-06-01 23:46:25.463214 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-06-01 23:46:25.463220 | orchestrator |
2025-06-01 23:46:25.463227 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-06-01 23:46:25.463234 | orchestrator | Sunday 01 June 2025 23:45:54 +0000 (0:00:00.661) 0:00:01.515 ***********
2025-06-01 23:46:25.463242 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 23:46:25.463249 | orchestrator |
2025-06-01 23:46:25.463254 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-06-01 23:46:25.463257 | orchestrator | Sunday 01 June 2025 23:45:55 +0000 (0:00:00.661) 0:00:02.177 ***********
2025-06-01 23:46:25.463261 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-06-01 23:46:25.463266 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-06-01 23:46:25.463269 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-06-01 23:46:25.463273 | orchestrator |
2025-06-01 23:46:25.463277 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-06-01 23:46:25.463281 | orchestrator | Sunday 01 June 2025 23:45:56 +0000 (0:00:01.018) 0:00:03.195 ***********
2025-06-01 23:46:25.463284 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-06-01 23:46:25.463288 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-06-01 23:46:25.463292 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-06-01 23:46:25.463295 | orchestrator |
2025-06-01 23:46:25.463299 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-06-01 23:46:25.463303 | orchestrator | Sunday 01 June 2025 23:45:58 +0000 (0:00:02.611) 0:00:05.806 ***********
2025-06-01 23:46:25.463307 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:46:25.463312 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:46:25.463317 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:46:25.463321 | orchestrator |
2025-06-01 23:46:25.463325 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-06-01 23:46:25.463329 | orchestrator | Sunday 01 June 2025 23:46:01 +0000 (0:00:02.205) 0:00:08.012 ***********
2025-06-01 23:46:25.463333 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:46:25.463336 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:46:25.463340 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:46:25.463344 | orchestrator |
2025-06-01 23:46:25.463358 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 23:46:25.463362 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 23:46:25.463367 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 23:46:25.463371 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 23:46:25.463375 | orchestrator |
2025-06-01 23:46:25.463379 | orchestrator |
2025-06-01 23:46:25.463383 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 23:46:25.463389 | orchestrator | Sunday 01 June 2025 23:46:03 +0000 (0:00:02.822) 0:00:10.834 ***********
2025-06-01 23:46:25.463396 | orchestrator | ===============================================================================
2025-06-01 23:46:25.463403 | orchestrator | memcached : Restart memcached container --------------------------------- 2.82s
2025-06-01 23:46:25.463414 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.61s
2025-06-01 23:46:25.463421 | orchestrator | memcached : Check memcached container ----------------------------------- 2.21s
2025-06-01 23:46:25.463427 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.02s
2025-06-01 23:46:25.463434 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.66s
2025-06-01 23:46:25.463438 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s
2025-06-01 23:46:25.463442 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.42s
2025-06-01 23:46:25.463446 | orchestrator |
2025-06-01 23:46:25.463449 | orchestrator |
2025-06-01 23:46:25.463453 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-01 23:46:25.463457 | orchestrator |
2025-06-01 23:46:25.463460 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-01 23:46:25.463464 | orchestrator | Sunday 01 June 2025 23:45:52 +0000 (0:00:00.440) 0:00:00.440 ***********
2025-06-01 23:46:25.463468 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:46:25.463472 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:46:25.463475 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:46:25.463479 | orchestrator |
2025-06-01 23:46:25.463483 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-01 23:46:25.463527 | orchestrator | Sunday 01 June 2025 23:45:52 +0000 (0:00:00.865) 0:00:01.305 ***********
2025-06-01 23:46:25.463532 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2025-06-01 23:46:25.463536 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2025-06-01 23:46:25.463540 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2025-06-01 23:46:25.463544 | orchestrator |
2025-06-01 23:46:25.463548 | orchestrator | PLAY [Apply role redis] ********************************************************
2025-06-01 23:46:25.463552 | orchestrator |
2025-06-01 23:46:25.463556 | orchestrator | TASK [redis : include_tasks] ***************************************************
2025-06-01 23:46:25.463560 | orchestrator | Sunday 01 June 2025 23:45:53 +0000 (0:00:00.894) 0:00:02.199 ***********
2025-06-01 23:46:25.463564 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 23:46:25.463569 | orchestrator |
2025-06-01 23:46:25.463573 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2025-06-01 23:46:25.463577 | orchestrator | Sunday 01 June 2025 23:45:54 +0000 (0:00:00.701) 0:00:02.900 ***********
2025-06-01 23:46:25.463585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-01 23:46:25.463453 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-01 23:46:25.463457 | orchestrator |
2025-06-01 23:46:25.463460 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-01 23:46:25.463464 | orchestrator | Sunday 01 June 2025 23:45:52 +0000 (0:00:00.440) 0:00:00.440 ***********
2025-06-01 23:46:25.463468 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:46:25.463472 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:46:25.463475 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:46:25.463479 | orchestrator |
2025-06-01 23:46:25.463483 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-01 23:46:25.463527 | orchestrator | Sunday 01 June 2025 23:45:52 +0000 (0:00:00.865) 0:00:01.305 ***********
2025-06-01 23:46:25.463532 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2025-06-01 23:46:25.463536 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2025-06-01 23:46:25.463540 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2025-06-01 23:46:25.463544 | orchestrator |
2025-06-01 23:46:25.463548 | orchestrator | PLAY [Apply role redis] ********************************************************
2025-06-01 23:46:25.463552 | orchestrator |
2025-06-01 23:46:25.463556 | orchestrator | TASK [redis : include_tasks] ***************************************************
2025-06-01 23:46:25.463560 | orchestrator | Sunday 01 June 2025 23:45:53 +0000 (0:00:00.894) 0:00:02.199 ***********
2025-06-01 23:46:25.463564 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 23:46:25.463569 | orchestrator |
2025-06-01 23:46:25.463573 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2025-06-01 23:46:25.463577 | orchestrator | Sunday 01 June 2025 23:45:54 +0000 (0:00:00.701) 0:00:02.900 ***********
2025-06-01 23:46:25.463585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-01 23:46:25.463594 | orchestrator | changed: [testbed-node-2] => (item=redis)
2025-06-01 23:46:25.463603 | orchestrator | changed: [testbed-node-0] => (item=redis)
2025-06-01 23:46:25.463612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-01 23:46:25.463617 | orchestrator | changed: [testbed-node-1] => (item=redis-sentinel)
2025-06-01 23:46:25.463626 | orchestrator | changed: [testbed-node-0] => (item=redis-sentinel)
2025-06-01 23:46:25.463631 | orchestrator |
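Each loop item in the task above is one kolla-ansible service definition. Rendered as YAML, the redis entry from the log reads roughly as follows (values transcribed from the logged item):

    redis:
      container_name: redis
      group: redis
      enabled: true
      image: registry.osism.tech/kolla/redis:2024.2
      volumes:
        - /etc/kolla/redis/:/var/lib/kolla/config_files/:ro
        - /etc/localtime:/etc/localtime:ro
        - /etc/timezone:/etc/timezone:ro
        - redis:/var/lib/redis/
        - kolla_logs:/var/log/kolla/
      dimensions: {}
      healthcheck:
        interval: "30"
        retries: "3"
        start_period: "5"
        test: ["CMD-SHELL", "healthcheck_listen redis-server 6379"]
        timeout: "30"

The redis-sentinel entry has the same shape plus an environment block (REDIS_CONF, REDIS_GEN_CONF) that points the sentinel at its rewritable configuration file.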
2025-06-01 23:46:25.463636 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2025-06-01 23:46:25.463640 | orchestrator | Sunday 01 June 2025 23:45:55 +0000 (0:00:01.398) 0:00:04.298 ***********
2025-06-01 23:46:25.463645 | orchestrator | changed: [testbed-node-2] => (item=redis)
2025-06-01 23:46:25.463649 | orchestrator | changed: [testbed-node-0] => (item=redis)
2025-06-01 23:46:25.463654 | orchestrator | changed: [testbed-node-1] => (item=redis)
2025-06-01 23:46:25.463665 | orchestrator | changed: [testbed-node-2] => (item=redis-sentinel)
2025-06-01 23:46:25.463669 | orchestrator | changed: [testbed-node-1] => (item=redis-sentinel)
2025-06-01 23:46:25.463677 | orchestrator | changed: [testbed-node-0] => (item=redis-sentinel)
2025-06-01 23:46:25.463681 | orchestrator |
2025-06-01 23:46:25.463685 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2025-06-01 23:46:25.463690 | orchestrator | Sunday 01 June 2025 23:45:59 +0000 (0:00:03.806) 0:00:08.105 ***********
2025-06-01 23:46:25.463695 | orchestrator | changed: [testbed-node-0] => (item=redis)
2025-06-01 23:46:25.463702 | orchestrator | changed: [testbed-node-1] => (item=redis)
2025-06-01 23:46:25.463709 | orchestrator | changed: [testbed-node-2] => (item=redis)
2025-06-01 23:46:25.463728 | orchestrator | changed: [testbed-node-0] => (item=redis-sentinel)
2025-06-01 23:46:25.463739 | orchestrator | changed: [testbed-node-1] => (item=redis-sentinel)
2025-06-01 23:46:25.463757 | orchestrator | changed: [testbed-node-2] => (item=redis-sentinel)
2025-06-01 23:46:25.463767 | orchestrator |
2025-06-01 23:46:25.463780 | orchestrator | TASK [redis : Check redis containers] ******************************************
2025-06-01 23:46:25.463788 | orchestrator | Sunday 01 June 2025 23:46:03 +0000 (0:00:03.353) 0:00:11.461 ***********
2025-06-01 23:46:25.463793 | orchestrator | changed: [testbed-node-2] => (item=redis)
2025-06-01 23:46:25.463800 | orchestrator | changed: [testbed-node-0] => (item=redis)
2025-06-01 23:46:25.463806 | orchestrator | changed: [testbed-node-1] => (item=redis)
2025-06-01 23:46:25.463821 | orchestrator | changed: [testbed-node-2] => (item=redis-sentinel)
2025-06-01 23:46:25.463828 | orchestrator | changed: [testbed-node-1] => (item=redis-sentinel)
2025-06-01 23:46:25.463834 | orchestrator | changed: [testbed-node-0] => (item=redis-sentinel)
2025-06-01 23:46:25.463841 | orchestrator |
2025-06-01 23:46:25.463847 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-06-01 23:46:25.463853 | orchestrator | Sunday 01 June 2025 23:46:05 +0000 (0:00:02.324) 0:00:13.786 ***********
2025-06-01 23:46:25.463860 | orchestrator |
2025-06-01 23:46:25.463866 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-06-01 23:46:25.463876 | orchestrator | Sunday 01 June 2025 23:46:05 +0000 (0:00:00.185) 0:00:13.971 ***********
2025-06-01 23:46:25.463882 | orchestrator |
2025-06-01 23:46:25.463889 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-06-01 23:46:25.463896 | orchestrator | Sunday 01 June 2025 23:46:05 +0000 (0:00:00.141) 0:00:14.113 ***********
2025-06-01 23:46:25.463902 | orchestrator |
2025-06-01 23:46:25.463908 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-06-01 23:46:25.463915 | orchestrator | Sunday 01 June 2025 23:46:06 +0000 (0:00:00.260) 0:00:14.374 ***********
2025-06-01 23:46:25.463922 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:46:25.463929 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:46:25.463935 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:46:25.463942 | orchestrator |
2025-06-01 23:46:25.463949 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-06-01 23:46:25.463956 | orchestrator | Sunday 01 June 2025 23:46:15 +0000 (0:00:09.554) 0:00:23.929 ***********
2025-06-01 23:46:25.463962 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:46:25.463969 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:46:25.463976 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:46:25.463988 | orchestrator |
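The three "Flush handlers" entries above are flush points that force any notified handlers to run before the play continues; in plain Ansible such a point is typically expressed with the meta module, roughly:

    - name: Flush handlers
      ansible.builtin.meta: flush_handlers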
2025-06-01 23:46:25.463995 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 23:46:25.464002 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 23:46:25.464009 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 23:46:25.464034 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 23:46:25.464041 | orchestrator |
2025-06-01 23:46:25.464047 | orchestrator |
2025-06-01 23:46:25.464053 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 23:46:25.464060 | orchestrator | Sunday 01 June 2025 23:46:24 +0000 (0:00:08.527) 0:00:32.456 ***********
2025-06-01 23:46:25.464067 | orchestrator | ===============================================================================
2025-06-01 23:46:25.464073 | orchestrator | redis : Restart redis container ----------------------------------------- 9.55s
2025-06-01 23:46:25.464079 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 8.53s
2025-06-01 23:46:25.464085 | orchestrator | redis : Copying over default config.json files -------------------------- 3.81s
2025-06-01 23:46:25.464091 | orchestrator | redis : Copying over redis config files --------------------------------- 3.35s
2025-06-01 23:46:25.464097 | orchestrator | redis : Check redis containers ------------------------------------------ 2.32s
2025-06-01 23:46:25.464103 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.40s
2025-06-01 23:46:25.464110 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.89s
2025-06-01 23:46:25.464116 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.87s
2025-06-01 23:46:25.464123 | orchestrator | redis : include_tasks --------------------------------------------------- 0.70s
2025-06-01 23:46:25.464133 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.59s
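Container restarts are the slowest steps in the recap. The healthcheck block carried in every service definition maps onto a Docker-style health check; expressed in docker-compose notation (an illustration of the semantics, not a file from this deployment), the logged redis values become:

    services:
      redis:
        image: registry.osism.tech/kolla/redis:2024.2
        healthcheck:
          test: ["CMD-SHELL", "healthcheck_listen redis-server 6379"]
          interval: 30s
          timeout: 30s
          retries: 3
          start_period: 5s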
2025-06-01 23:46:25.464254 | orchestrator | 2025-06-01 23:46:25 | INFO  | Task cf7790bc-b9f2-462a-b640-62fc0b8882d4 is in state STARTED
2025-06-01 23:46:25.464292 | orchestrator | 2025-06-01 23:46:25 | INFO  | Task 64f3934a-e816-4f92-891b-d6679f5d4f52 is in state STARTED
2025-06-01 23:46:25.469898 | orchestrator | 2025-06-01 23:46:25 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED
2025-06-01 23:46:25.472866 | orchestrator | 2025-06-01 23:46:25 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED
2025-06-01 23:46:25.474711 | orchestrator | 2025-06-01 23:46:25 | INFO  | Task 4061f35a-c88c-4c7f-ab33-a7858ad18527 is in state STARTED
2025-06-01 23:46:25.474827 | orchestrator | 2025-06-01 23:46:25 | INFO  | Wait 1 second(s) until the next check
[...]
2025-06-01 23:47:05.033328 | orchestrator | 2025-06-01 23:47:05 | INFO  | Task cf7790bc-b9f2-462a-b640-62fc0b8882d4 is in state STARTED
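Each of these lines is one pass of a poll loop: the OSISM wait logic re-queries the state of every background task once per second until it leaves STARTED. An equivalent loop in Ansible terms, purely illustrative (check-task-state is a hypothetical helper script, not part of OSISM):

    - name: Wait until a deployment task has finished (illustrative poll loop)
      # check-task-state is a hypothetical helper that prints the task state
      ansible.builtin.command: /usr/local/bin/check-task-state {{ task_id }}
      register: task_state
      until: "'STARTED' not in task_state.stdout"
      retries: 300
      delay: 1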
2025-06-01 23:47:05.037741 | orchestrator |
2025-06-01 23:47:05.037785 | orchestrator |
2025-06-01 23:47:05.037795 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-01 23:47:05.037803 | orchestrator |
2025-06-01 23:47:05.037811 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-01 23:47:05.037819 | orchestrator | Sunday 01 June 2025 23:45:53 +0000 (0:00:00.497) 0:00:00.497 ***********
2025-06-01 23:47:05.037827 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:47:05.037836 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:47:05.037843 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:47:05.037851 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:47:05.037859 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:47:05.037866 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:47:05.037874 | orchestrator |
2025-06-01 23:47:05.037881 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-01 23:47:05.037889 | orchestrator | Sunday 01 June 2025 23:45:54 +0000 (0:00:00.980) 0:00:01.477 ***********
2025-06-01 23:47:05.037897 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-01 23:47:05.037905 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-01 23:47:05.037912 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-01 23:47:05.037920 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-01 23:47:05.037927 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-01 23:47:05.037935 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-01 23:47:05.037942 | orchestrator |
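The grouping tasks assign hosts to dynamic groups whose names encode configuration flags, which is why enable_openvswitch_True_enable_ovs_dpdk_False appears as an item: it is the computed group key. A minimal sketch of the mechanism (group_by is the standard module for this; the exact variable names are assumptions):

    - name: Group hosts based on enabled services
      ansible.builtin.group_by:
        key: "enable_openvswitch_{{ enable_openvswitch }}_enable_ovs_dpdk_{{ enable_ovs_dpdk }}"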
2025-06-01 23:47:05.037949 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2025-06-01 23:47:05.037957 | orchestrator |
2025-06-01 23:47:05.037964 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2025-06-01 23:47:05.037971 | orchestrator | Sunday 01 June 2025 23:45:54 +0000 (0:00:00.819) 0:00:02.296 ***********
2025-06-01 23:47:05.037980 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 23:47:05.037989 | orchestrator |
2025-06-01 23:47:05.037997 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-06-01 23:47:05.038073 | orchestrator | Sunday 01 June 2025 23:45:56 +0000 (0:00:01.962) 0:00:04.258 ***********
2025-06-01 23:47:05.038087 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-06-01 23:47:05.038095 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-06-01 23:47:05.038120 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-06-01 23:47:05.038128 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-06-01 23:47:05.038135 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-06-01 23:47:05.038142 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-06-01 23:47:05.038149 | orchestrator |
2025-06-01 23:47:05.038157 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-06-01 23:47:05.038164 | orchestrator | Sunday 01 June 2025 23:45:58 +0000 (0:00:02.595) 0:00:06.256 ***********
2025-06-01 23:47:05.038171 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-06-01 23:47:05.038179 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-06-01 23:47:05.038186 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-06-01 23:47:05.038193 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-06-01 23:47:05.038200 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-06-01 23:47:05.038207 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-06-01 23:47:05.038214 | orchestrator |
2025-06-01 23:47:05.038222 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-06-01 23:47:05.038229 | orchestrator | Sunday 01 June 2025 23:46:01 +0000 (0:00:02.006) 0:00:08.851 ***********
2025-06-01 23:47:05.038236 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2025-06-01 23:47:05.038243 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:47:05.038251 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2025-06-01 23:47:05.038258 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:47:05.038265 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2025-06-01 23:47:05.038273 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:47:05.038280 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2025-06-01 23:47:05.038298 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:47:05.038305 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2025-06-01 23:47:05.038313 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:47:05.038320 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2025-06-01 23:47:05.038327 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:47:05.038335 | orchestrator |
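The module-load role loads the kernel module immediately and then persists it across reboots via modules-load.d. A plausible shape of those two tasks (module name from the log; everything else an assumption):

    - name: Load modules
      community.general.modprobe:
        name: openvswitch
        state: present

    - name: Persist modules via modules-load.d
      ansible.builtin.copy:
        content: "openvswitch\n"
        dest: /etc/modules-load.d/openvswitch.conf
        mode: "0644"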
2025-06-01 23:47:05.038344 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2025-06-01 23:47:05.038353 | orchestrator | Sunday 01 June 2025 23:46:03 +0000 (0:00:01.133) 0:00:10.858 ***********
2025-06-01 23:47:05.038361 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:47:05.038369 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:47:05.038378 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:47:05.038386 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:47:05.038395 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:47:05.038403 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:47:05.038411 | orchestrator |
2025-06-01 23:47:05.038420 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2025-06-01 23:47:05.038428 | orchestrator | Sunday 01 June 2025 23:46:04 +0000 (0:00:01.133) 0:00:11.991 ***********
2025-06-01 23:47:05.038452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-01 23:47:05.038465 | orchestrator | changed: [testbed-node-1] => (item=openvswitch-db-server)
2025-06-01 23:47:05.038482 | orchestrator | changed: [testbed-node-3] => (item=openvswitch-db-server)
2025-06-01 23:47:05.038492 | orchestrator | changed: [testbed-node-2] => (item=openvswitch-db-server)
2025-06-01 23:47:05.038505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-01 23:47:05.038515 | orchestrator | changed: [testbed-node-2] => (item=openvswitch-vswitchd)
2025-06-01 23:47:05.038529 | orchestrator | changed: [testbed-node-4] => (item=openvswitch-db-server)
2025-06-01 23:47:05.038545 | orchestrator | changed: [testbed-node-1] => (item=openvswitch-vswitchd)
2025-06-01 23:47:05.038554 | orchestrator | changed: [testbed-node-3] => (item=openvswitch-vswitchd)
2025-06-01 23:47:05.038563 | orchestrator | changed: [testbed-node-5] => (item=openvswitch-db-server)
2025-06-01 23:47:05.038576 | orchestrator | changed: [testbed-node-4] => (item=openvswitch-vswitchd)
2025-06-01 23:47:05.038590 | orchestrator | changed: [testbed-node-5] => (item=openvswitch-vswitchd)
2025-06-01 23:47:05.038599 | orchestrator |
2025-06-01 23:47:05.038608 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2025-06-01 23:47:05.038616 | orchestrator | Sunday 01 June 2025 23:46:07 +0000 (0:00:02.624) 0:00:14.616 ***********
2025-06-01 23:47:05.038625 | orchestrator | changed: [testbed-node-0] => (item=openvswitch-db-server)
2025-06-01 23:47:05.038637 | orchestrator | changed: [testbed-node-1] => (item=openvswitch-db-server)
2025-06-01 23:47:05.038647 | orchestrator | changed: [testbed-node-2] => (item=openvswitch-db-server)
2025-06-01 23:47:05.038656 | orchestrator | changed: [testbed-node-4] => (item=openvswitch-db-server)
2025-06-01 23:47:05.038669 | orchestrator | changed: [testbed-node-0] => (item=openvswitch-vswitchd)
2025-06-01 23:47:05.038690 | orchestrator | changed: [testbed-node-3] => (item=openvswitch-db-server)
2025-06-01 23:47:05.038702 | orchestrator | changed: [testbed-node-1] => (item=openvswitch-vswitchd)
2025-06-01 23:47:05.038710 | orchestrator | changed: [testbed-node-5] => (item=openvswitch-db-server)
2025-06-01 23:47:05.038718 | orchestrator | changed: [testbed-node-2] => (item=openvswitch-vswitchd)
2025-06-01 23:47:05.038729 | orchestrator | changed: [testbed-node-4] => (item=openvswitch-vswitchd)
2025-06-01 23:47:05.038736 | orchestrator | changed: [testbed-node-3] => (item=openvswitch-vswitchd)
2025-06-01 23:47:05.038750 | orchestrator | changed: [testbed-node-5] => (item=openvswitch-vswitchd)
2025-06-01 23:47:05.038762 | orchestrator |
2025-06-01 23:47:05.038770 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2025-06-01 23:47:05.038778 | orchestrator | Sunday 01 June 2025 23:46:11 +0000 (0:00:04.125) 0:00:18.742 ***********
2025-06-01 23:47:05.038785 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:47:05.038792 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:47:05.038800 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:47:05.038807 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:47:05.038814 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:47:05.038821 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:47:05.038829 | orchestrator |
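Two service definitions drive this play. Rendered as YAML, the logged openvswitch-vswitchd item reads roughly as follows; note privileged: true and the shared /run/openvswitch mount, which let vswitchd reach the host datapath and share its control sockets with the DB container:

    openvswitch-vswitchd:
      container_name: openvswitch_vswitchd
      image: registry.osism.tech/kolla/openvswitch-vswitchd:2024.2
      enabled: true
      group: openvswitch
      host_in_groups: true
      privileged: true
      volumes:
        - /etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro
        - /etc/localtime:/etc/localtime:ro
        - /etc/timezone:/etc/timezone:ro
        - /lib/modules:/lib/modules:ro
        - /run/openvswitch:/run/openvswitch:shared
        - kolla_logs:/var/log/kolla/
      dimensions: {}
      healthcheck:
        interval: "30"
        retries: "3"
        start_period: "5"
        test: ["CMD-SHELL", "ovs-appctl version"]
        timeout: "30"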
2025-06-01 23:47:05.038836 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2025-06-01 23:47:05.038843 | orchestrator | Sunday 01 June 2025 23:46:13 +0000 (0:00:01.730) 0:00:20.472 ***********
2025-06-01 23:47:05.038851 | orchestrator | changed: [testbed-node-0] => (item=openvswitch-db-server)
2025-06-01 23:47:05.038859 | orchestrator | changed: [testbed-node-2] => (item=openvswitch-db-server)
2025-06-01 23:47:05.038866 | orchestrator | changed: [testbed-node-1] => (item=openvswitch-db-server)
2025-06-01 23:47:05.038874 | orchestrator | changed: [testbed-node-3] => (item=openvswitch-db-server)
2025-06-01 23:47:05.038890 | orchestrator | changed: [testbed-node-0] => (item=openvswitch-vswitchd)
2025-06-01 23:47:05.038898 | orchestrator | changed: [testbed-node-2] => (item=openvswitch-vswitchd)
2025-06-01 23:47:05.038905 | orchestrator | changed: [testbed-node-4] => (item=openvswitch-db-server)
2025-06-01 23:47:05.038918 | orchestrator | changed: [testbed-node-3] => (item=openvswitch-vswitchd)
2025-06-01 23:47:05.038929 | orchestrator | changed: [testbed-node-1] => (item=openvswitch-vswitchd)
2025-06-01 23:47:05.038937 | orchestrator | changed: [testbed-node-5] => (item=openvswitch-db-server)
2025-06-01 23:47:05.038959 | orchestrator | changed: [testbed-node-4] => (item=openvswitch-vswitchd)
2025-06-01 23:47:05.038968 | orchestrator | changed: [testbed-node-5] => (item=openvswitch-vswitchd)
2025-06-01 23:47:05.038975 | orchestrator |
2025-06-01 23:47:05.038982 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-06-01 23:47:05.038990 | orchestrator | Sunday 01 June 2025 23:46:16 +0000 (0:00:03.674) 0:00:24.146 ***********
2025-06-01 23:47:05.038997 | orchestrator |
2025-06-01 23:47:05.039022 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-06-01 23:47:05.039035 | orchestrator | Sunday 01 June 2025 23:46:17 +0000 (0:00:00.373) 0:00:24.519 ***********
2025-06-01 23:47:05.039047 | orchestrator |
2025-06-01 23:47:05.039058 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-06-01 23:47:05.039070 | orchestrator | Sunday 01 June 2025 23:46:17 +0000 (0:00:00.327) 0:00:24.847 ***********
2025-06-01 23:47:05.039077 | orchestrator |
2025-06-01 23:47:05.039085 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-06-01 23:47:05.039092 | orchestrator | Sunday 01 June 2025 23:46:17 +0000 (0:00:00.325) 0:00:25.172 ***********
2025-06-01 23:47:05.039099 | orchestrator |
2025-06-01 23:47:05.039106 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-06-01 23:47:05.039113 | orchestrator | Sunday 01 June 2025 23:46:17 +0000 (0:00:00.239) 0:00:25.412 ***********
2025-06-01 23:47:05.039121 | orchestrator |
2025-06-01 23:47:05.039128 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-06-01 23:47:05.039135 | orchestrator | Sunday 01 June 2025 23:46:18 +0000 (0:00:00.381) 0:00:25.794 ***********
2025-06-01 23:47:05.039142 | orchestrator |
2025-06-01 23:47:05.039149 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2025-06-01 23:47:05.039156 | orchestrator | Sunday 01 June 2025 23:46:18 +0000 (0:00:00.569) 0:00:26.363 ***********
2025-06-01 23:47:05.039164 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:47:05.039171 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:47:05.039178 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:47:05.039185 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:47:05.039192 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:47:05.039204 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:47:05.039212 | orchestrator |
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-01 23:47:05.038975 | orchestrator | 2025-06-01 23:47:05.038982 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-01 23:47:05.038990 | orchestrator | Sunday 01 June 2025 23:46:16 +0000 (0:00:03.674) 0:00:24.146 *********** 2025-06-01 23:47:05.038997 | orchestrator | 2025-06-01 23:47:05.039022 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-01 23:47:05.039035 | orchestrator | Sunday 01 June 2025 23:46:17 +0000 (0:00:00.373) 0:00:24.519 *********** 2025-06-01 23:47:05.039047 | orchestrator | 2025-06-01 23:47:05.039058 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-01 23:47:05.039070 | orchestrator | Sunday 01 June 2025 23:46:17 +0000 (0:00:00.327) 0:00:24.847 *********** 2025-06-01 23:47:05.039077 | orchestrator | 2025-06-01 23:47:05.039085 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-01 23:47:05.039092 | orchestrator | Sunday 01 June 2025 23:46:17 +0000 (0:00:00.325) 0:00:25.172 *********** 2025-06-01 23:47:05.039099 | orchestrator | 2025-06-01 23:47:05.039106 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-01 23:47:05.039113 | orchestrator | Sunday 01 June 2025 23:46:17 +0000 (0:00:00.239) 0:00:25.412 *********** 2025-06-01 23:47:05.039121 | orchestrator | 2025-06-01 23:47:05.039128 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-01 23:47:05.039135 | orchestrator | Sunday 01 June 2025 23:46:18 +0000 (0:00:00.381) 0:00:25.794 *********** 2025-06-01 23:47:05.039142 | orchestrator | 2025-06-01 23:47:05.039149 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-06-01 23:47:05.039156 | orchestrator | Sunday 01 June 2025 23:46:18 +0000 (0:00:00.569) 0:00:26.363 *********** 2025-06-01 23:47:05.039164 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:47:05.039171 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:47:05.039178 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:47:05.039185 | orchestrator | changed: [testbed-node-3] 2025-06-01 23:47:05.039192 | orchestrator | changed: [testbed-node-4] 2025-06-01 23:47:05.039204 | orchestrator | changed: [testbed-node-5] 2025-06-01 23:47:05.039212 | orchestrator | 2025-06-01 23:47:05.039219 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-06-01 23:47:05.039226 | orchestrator | Sunday 01 June 2025 23:46:30 +0000 (0:00:11.681) 0:00:38.045 *********** 2025-06-01 23:47:05.039234 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:47:05.039241 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:47:05.039248 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:47:05.039255 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:47:05.039262 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:47:05.039269 | orchestrator | ok: 
[testbed-node-5] 2025-06-01 23:47:05.039276 | orchestrator | 2025-06-01 23:47:05.039283 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-06-01 23:47:05.039291 | orchestrator | Sunday 01 June 2025 23:46:33 +0000 (0:00:02.388) 0:00:40.434 *********** 2025-06-01 23:47:05.039302 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:47:05.039309 | orchestrator | changed: [testbed-node-4] 2025-06-01 23:47:05.039316 | orchestrator | changed: [testbed-node-3] 2025-06-01 23:47:05.039323 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:47:05.039331 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:47:05.039338 | orchestrator | changed: [testbed-node-5] 2025-06-01 23:47:05.039345 | orchestrator | 2025-06-01 23:47:05.039352 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-06-01 23:47:05.039359 | orchestrator | Sunday 01 June 2025 23:46:42 +0000 (0:00:09.386) 0:00:49.820 *********** 2025-06-01 23:47:05.039366 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-06-01 23:47:05.039374 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-06-01 23:47:05.039381 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-06-01 23:47:05.039388 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-06-01 23:47:05.039396 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-06-01 23:47:05.039407 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-06-01 23:47:05.039415 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-06-01 23:47:05.039423 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-06-01 23:47:05.039430 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-06-01 23:47:05.039437 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-06-01 23:47:05.039444 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-06-01 23:47:05.039451 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-06-01 23:47:05.039458 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-01 23:47:05.039465 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-01 23:47:05.039473 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-01 23:47:05.039480 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-01 23:47:05.039487 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 
'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-01 23:47:05.039499 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-01 23:47:05.039506 | orchestrator | 2025-06-01 23:47:05.039514 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-06-01 23:47:05.039521 | orchestrator | Sunday 01 June 2025 23:46:50 +0000 (0:00:07.950) 0:00:57.770 *********** 2025-06-01 23:47:05.039528 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-06-01 23:47:05.039535 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:47:05.039543 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-06-01 23:47:05.039550 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:47:05.039559 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-06-01 23:47:05.039571 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:47:05.039583 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-06-01 23:47:05.039596 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-06-01 23:47:05.039609 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-06-01 23:47:05.039623 | orchestrator | 2025-06-01 23:47:05.039637 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-06-01 23:47:05.039652 | orchestrator | Sunday 01 June 2025 23:46:52 +0000 (0:00:02.437) 0:01:00.207 *********** 2025-06-01 23:47:05.039666 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-06-01 23:47:05.039680 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:47:05.039689 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-06-01 23:47:05.039698 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:47:05.039711 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-06-01 23:47:05.039726 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:47:05.039740 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-06-01 23:47:05.039756 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-06-01 23:47:05.039771 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-06-01 23:47:05.039787 | orchestrator | 2025-06-01 23:47:05.039798 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-06-01 23:47:05.039807 | orchestrator | Sunday 01 June 2025 23:46:56 +0000 (0:00:03.612) 0:01:03.820 *********** 2025-06-01 23:47:05.039815 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:47:05.039824 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:47:05.039832 | orchestrator | changed: [testbed-node-4] 2025-06-01 23:47:05.039846 | orchestrator | changed: [testbed-node-3] 2025-06-01 23:47:05.039855 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:47:05.039863 | orchestrator | changed: [testbed-node-5] 2025-06-01 23:47:05.039872 | orchestrator | 2025-06-01 23:47:05.039881 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 23:47:05.039890 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-01 23:47:05.039899 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-01 23:47:05.039908 | orchestrator | testbed-node-2 : ok=15  changed=11  
unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-01 23:47:05.039916 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-01 23:47:05.039925 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-01 23:47:05.039940 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-01 23:47:05.039955 | orchestrator | 2025-06-01 23:47:05.039964 | orchestrator | 2025-06-01 23:47:05.039973 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 23:47:05.039982 | orchestrator | Sunday 01 June 2025 23:47:04 +0000 (0:00:08.072) 0:01:11.893 *********** 2025-06-01 23:47:05.039991 | orchestrator | =============================================================================== 2025-06-01 23:47:05.040000 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 17.46s 2025-06-01 23:47:05.040040 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.68s 2025-06-01 23:47:05.040094 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.95s 2025-06-01 23:47:05.040102 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.13s 2025-06-01 23:47:05.040111 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.67s 2025-06-01 23:47:05.040120 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.61s 2025-06-01 23:47:05.040128 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.62s 2025-06-01 23:47:05.040137 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.60s 2025-06-01 23:47:05.040146 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.44s 2025-06-01 23:47:05.040154 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.39s 2025-06-01 23:47:05.040163 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 2.22s 2025-06-01 23:47:05.040171 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.01s 2025-06-01 23:47:05.040180 | orchestrator | module-load : Load modules ---------------------------------------------- 2.00s 2025-06-01 23:47:05.040189 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.96s 2025-06-01 23:47:05.040197 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.73s 2025-06-01 23:47:05.040206 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.13s 2025-06-01 23:47:05.040214 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.98s 2025-06-01 23:47:05.040223 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.82s 2025-06-01 23:47:05.040263 | orchestrator | 2025-06-01 23:47:05 | INFO  | Task 64f3934a-e816-4f92-891b-d6679f5d4f52 is in state SUCCESS 2025-06-01 23:47:05.040343 | orchestrator | 2025-06-01 23:47:05 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:47:05.040355 | orchestrator | 2025-06-01 23:47:05 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED 
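For reference on what the play above actually configured: each item of the "Set system-id, hostname and hw-offload" task maps onto an ovs-vsctl operation against a map column of the single Open_vSwitch record, and the 'state': 'absent' items remove a key rather than set it, which would explain why the hw-offload entries report ok instead of changed (removing a key that was never set is a no-op). A minimal sketch of the equivalent calls, assuming only the stock ovs-vsctl CLI; the log does not show which Ansible module the kolla role actually drives:

    import subprocess

    def ovs_set(col: str, name: str, value: str) -> None:
        # An item like {'col': 'external_ids', 'name': 'system-id', 'value': ...}
        # becomes:  ovs-vsctl set Open_vSwitch . external_ids:system-id=<value>
        subprocess.run(
            ["ovs-vsctl", "set", "Open_vSwitch", ".", f"{col}:{name}={value}"],
            check=True,
        )

    def ovs_remove(col: str, name: str) -> None:
        # Items carrying 'state': 'absent' drop the key instead of setting it:
        #           ovs-vsctl remove Open_vSwitch . other_config hw-offload
        subprocess.run(
            ["ovs-vsctl", "remove", "Open_vSwitch", ".", col, name],
            check=True,
        )

    hostname = "testbed-node-0"  # per-host value, as in the log items above
    ovs_set("external_ids", "system-id", hostname)
    ovs_set("external_ids", "hostname", hostname)
    ovs_remove("other_config", "hw-offload")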
2025-06-01 23:47:05.048695 | orchestrator | 2025-06-01 23:47:05 | INFO  | Task 4061f35a-c88c-4c7f-ab33-a7858ad18527 is in state STARTED 2025-06-01 23:47:05.048761 | orchestrator | 2025-06-01 23:47:05 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:47:08.099622 | orchestrator | 2025-06-01 23:47:08 | INFO  | Task cf7790bc-b9f2-462a-b640-62fc0b8882d4 is in state STARTED 2025-06-01 23:47:08.099869 | orchestrator | 2025-06-01 23:47:08 | INFO  | Task cadbd4c5-6e2a-4b82-814b-82036705b9c4 is in state STARTED 2025-06-01 23:47:08.100785 | orchestrator | 2025-06-01 23:47:08 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:47:08.101687 | orchestrator | 2025-06-01 23:47:08 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED 2025-06-01 23:47:08.102442 | orchestrator | 2025-06-01 23:47:08 | INFO  | Task 4061f35a-c88c-4c7f-ab33-a7858ad18527 is in state STARTED 2025-06-01 23:47:08.102467 | orchestrator | 2025-06-01 23:47:08 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:47:11.128925 | orchestrator | 2025-06-01 23:47:11 | INFO  | Task cf7790bc-b9f2-462a-b640-62fc0b8882d4 is in state STARTED 2025-06-01 23:47:11.129761 | orchestrator | 2025-06-01 23:47:11 | INFO  | Task cadbd4c5-6e2a-4b82-814b-82036705b9c4 is in state STARTED 2025-06-01 23:47:11.130284 | orchestrator | 2025-06-01 23:47:11 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:47:11.132607 | orchestrator | 2025-06-01 23:47:11 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED 2025-06-01 23:47:11.132648 | orchestrator | 2025-06-01 23:47:11 | INFO  | Task 4061f35a-c88c-4c7f-ab33-a7858ad18527 is in state STARTED 2025-06-01 23:47:11.132661 | orchestrator | 2025-06-01 23:47:11 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:47:14.172546 | orchestrator | 2025-06-01 23:47:14 | INFO  | Task cf7790bc-b9f2-462a-b640-62fc0b8882d4 is in state STARTED 2025-06-01 23:47:14.174754 | orchestrator | 2025-06-01 23:47:14 | INFO  | Task cadbd4c5-6e2a-4b82-814b-82036705b9c4 is in state STARTED 2025-06-01 23:47:14.177775 | orchestrator | 2025-06-01 23:47:14 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:47:14.177815 | orchestrator | 2025-06-01 23:47:14 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED 2025-06-01 23:47:14.178677 | orchestrator | 2025-06-01 23:47:14 | INFO  | Task 4061f35a-c88c-4c7f-ab33-a7858ad18527 is in state STARTED 2025-06-01 23:47:14.178705 | orchestrator | 2025-06-01 23:47:14 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:47:17.219383 | orchestrator | 2025-06-01 23:47:17 | INFO  | Task cf7790bc-b9f2-462a-b640-62fc0b8882d4 is in state STARTED 2025-06-01 23:47:17.219937 | orchestrator | 2025-06-01 23:47:17 | INFO  | Task cadbd4c5-6e2a-4b82-814b-82036705b9c4 is in state STARTED 2025-06-01 23:47:17.222269 | orchestrator | 2025-06-01 23:47:17 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:47:17.225207 | orchestrator | 2025-06-01 23:47:17 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED 2025-06-01 23:47:17.225958 | orchestrator | 2025-06-01 23:47:17 | INFO  | Task 4061f35a-c88c-4c7f-ab33-a7858ad18527 is in state STARTED 2025-06-01 23:47:17.225984 | orchestrator | 2025-06-01 23:47:17 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:47:20.278954 | orchestrator | 2025-06-01 23:47:20 | INFO  | Task cf7790bc-b9f2-462a-b640-62fc0b8882d4 is in state STARTED 
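The interleaved "Task … is in state STARTED" / "Wait 1 second(s) until the next check" records here are a client-side wait loop over background task IDs, repeated until every task reaches SUCCESS. A rough, self-contained sketch of that loop; get_task_state() is a hypothetical stand-in for the real status query, which this log does not reveal:

    import itertools
    import time

    _poll_counter = itertools.count()

    def get_task_state(task_id: str) -> str:
        # Hypothetical stand-in for the real backend query: here every task
        # flips to SUCCESS after a dozen polls so the sketch runs on its own.
        return "SUCCESS" if next(_poll_counter) > 12 else "STARTED"

    def wait_for_tasks(task_ids, poll_interval: float = 1.0) -> None:
        # Keep polling until every task has reported SUCCESS, printing the
        # same per-task status lines seen in the surrounding log records.
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state == "SUCCESS":
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(poll_interval)} second(s) until the next check")
                time.sleep(poll_interval)

    wait_for_tasks([
        "cf7790bc-b9f2-462a-b640-62fc0b8882d4",
        "cadbd4c5-6e2a-4b82-814b-82036705b9c4",
        "5c1eb716-0a80-4be3-ac47-0c14186d9993",
    ])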
2025-06-01 23:47:20.291173 | orchestrator | 2025-06-01 23:47:20 | INFO  | Task cadbd4c5-6e2a-4b82-814b-82036705b9c4 is in state STARTED 2025-06-01 23:47:20.295888 | orchestrator | 2025-06-01 23:47:20 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:47:20.297214 | orchestrator | 2025-06-01 23:47:20 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED 2025-06-01 23:47:20.301639 | orchestrator | 2025-06-01 23:47:20 | INFO  | Task 4061f35a-c88c-4c7f-ab33-a7858ad18527 is in state STARTED 2025-06-01 23:47:20.301726 | orchestrator | 2025-06-01 23:47:20 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:47:23.350615 | orchestrator | 2025-06-01 23:47:23 | INFO  | Task cf7790bc-b9f2-462a-b640-62fc0b8882d4 is in state STARTED 2025-06-01 23:47:23.351219 | orchestrator | 2025-06-01 23:47:23 | INFO  | Task cadbd4c5-6e2a-4b82-814b-82036705b9c4 is in state STARTED 2025-06-01 23:47:23.354682 | orchestrator | 2025-06-01 23:47:23 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:47:23.355690 | orchestrator | 2025-06-01 23:47:23 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED 2025-06-01 23:47:23.356642 | orchestrator | 2025-06-01 23:47:23 | INFO  | Task 4061f35a-c88c-4c7f-ab33-a7858ad18527 is in state STARTED 2025-06-01 23:47:23.356718 | orchestrator | 2025-06-01 23:47:23 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:47:26.408241 | orchestrator | 2025-06-01 23:47:26 | INFO  | Task cf7790bc-b9f2-462a-b640-62fc0b8882d4 is in state STARTED 2025-06-01 23:47:26.410794 | orchestrator | 2025-06-01 23:47:26 | INFO  | Task cadbd4c5-6e2a-4b82-814b-82036705b9c4 is in state STARTED 2025-06-01 23:47:26.414512 | orchestrator | 2025-06-01 23:47:26 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:47:26.418446 | orchestrator | 2025-06-01 23:47:26 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED 2025-06-01 23:47:26.421980 | orchestrator | 2025-06-01 23:47:26 | INFO  | Task 4061f35a-c88c-4c7f-ab33-a7858ad18527 is in state STARTED 2025-06-01 23:47:26.422105 | orchestrator | 2025-06-01 23:47:26 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:47:29.467442 | orchestrator | 2025-06-01 23:47:29 | INFO  | Task cf7790bc-b9f2-462a-b640-62fc0b8882d4 is in state STARTED 2025-06-01 23:47:29.467581 | orchestrator | 2025-06-01 23:47:29 | INFO  | Task cadbd4c5-6e2a-4b82-814b-82036705b9c4 is in state STARTED 2025-06-01 23:47:29.467597 | orchestrator | 2025-06-01 23:47:29 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:47:29.467609 | orchestrator | 2025-06-01 23:47:29 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED 2025-06-01 23:47:29.467621 | orchestrator | 2025-06-01 23:47:29 | INFO  | Task 4061f35a-c88c-4c7f-ab33-a7858ad18527 is in state STARTED 2025-06-01 23:47:29.467632 | orchestrator | 2025-06-01 23:47:29 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:47:32.492636 | orchestrator | 2025-06-01 23:47:32 | INFO  | Task cf7790bc-b9f2-462a-b640-62fc0b8882d4 is in state STARTED 2025-06-01 23:47:32.492810 | orchestrator | 2025-06-01 23:47:32 | INFO  | Task cadbd4c5-6e2a-4b82-814b-82036705b9c4 is in state STARTED 2025-06-01 23:47:32.492940 | orchestrator | 2025-06-01 23:47:32 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:47:32.493738 | orchestrator | 2025-06-01 23:47:32 | INFO  | Task 
58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED 2025-06-01 23:47:32.494121 | orchestrator | 2025-06-01 23:47:32 | INFO  | Task 4061f35a-c88c-4c7f-ab33-a7858ad18527 is in state STARTED 2025-06-01 23:47:32.494155 | orchestrator | 2025-06-01 23:47:32 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:47:35.526798 | orchestrator | 2025-06-01 23:47:35 | INFO  | Task cf7790bc-b9f2-462a-b640-62fc0b8882d4 is in state STARTED 2025-06-01 23:47:35.527050 | orchestrator | 2025-06-01 23:47:35 | INFO  | Task cadbd4c5-6e2a-4b82-814b-82036705b9c4 is in state STARTED 2025-06-01 23:47:35.527690 | orchestrator | 2025-06-01 23:47:35 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:47:35.528488 | orchestrator | 2025-06-01 23:47:35 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED 2025-06-01 23:47:35.530299 | orchestrator | 2025-06-01 23:47:35 | INFO  | Task 4061f35a-c88c-4c7f-ab33-a7858ad18527 is in state STARTED 2025-06-01 23:47:35.530336 | orchestrator | 2025-06-01 23:47:35 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:47:38.571311 | orchestrator | 2025-06-01 23:47:38 | INFO  | Task cf7790bc-b9f2-462a-b640-62fc0b8882d4 is in state STARTED 2025-06-01 23:47:38.573763 | orchestrator | 2025-06-01 23:47:38 | INFO  | Task cadbd4c5-6e2a-4b82-814b-82036705b9c4 is in state STARTED 2025-06-01 23:47:38.575602 | orchestrator | 2025-06-01 23:47:38 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:47:38.578152 | orchestrator | 2025-06-01 23:47:38 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED 2025-06-01 23:47:38.581295 | orchestrator | 2025-06-01 23:47:38 | INFO  | Task 4061f35a-c88c-4c7f-ab33-a7858ad18527 is in state STARTED 2025-06-01 23:47:38.581864 | orchestrator | 2025-06-01 23:47:38 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:47:41.632814 | orchestrator | 2025-06-01 23:47:41 | INFO  | Task cf7790bc-b9f2-462a-b640-62fc0b8882d4 is in state STARTED 2025-06-01 23:47:41.636789 | orchestrator | 2025-06-01 23:47:41 | INFO  | Task cadbd4c5-6e2a-4b82-814b-82036705b9c4 is in state STARTED 2025-06-01 23:47:41.638683 | orchestrator | 2025-06-01 23:47:41 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:47:41.641653 | orchestrator | 2025-06-01 23:47:41 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED 2025-06-01 23:47:41.644339 | orchestrator | 2025-06-01 23:47:41 | INFO  | Task 4061f35a-c88c-4c7f-ab33-a7858ad18527 is in state STARTED 2025-06-01 23:47:41.644531 | orchestrator | 2025-06-01 23:47:41 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:47:44.707959 | orchestrator | 2025-06-01 23:47:44 | INFO  | Task e8db727c-871f-43f4-9798-3d3b2d8e29b2 is in state STARTED 2025-06-01 23:47:44.708415 | orchestrator | 2025-06-01 23:47:44 | INFO  | Task d4a1121d-247b-4ac9-961a-536b315ff3be is in state STARTED 2025-06-01 23:47:44.714785 | orchestrator | 2025-06-01 23:47:44 | INFO  | Task cf7790bc-b9f2-462a-b640-62fc0b8882d4 is in state STARTED 2025-06-01 23:47:44.720287 | orchestrator | 2025-06-01 23:47:44 | INFO  | Task cadbd4c5-6e2a-4b82-814b-82036705b9c4 is in state STARTED 2025-06-01 23:47:44.721485 | orchestrator | 2025-06-01 23:47:44 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:47:44.722586 | orchestrator | 2025-06-01 23:47:44 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED 2025-06-01 23:47:44.724308 | orchestrator | 2025-06-01 
23:47:44.724350 | orchestrator | 2025-06-01 23:47:44 | INFO  | Task 4061f35a-c88c-4c7f-ab33-a7858ad18527 is in state SUCCESS 2025-06-01 23:47:44.728463 | orchestrator | 2025-06-01 23:47:44.728514 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-06-01 23:47:44.728537 | orchestrator | 2025-06-01 23:47:44.728555 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-06-01 23:47:44.728573 | orchestrator | Sunday 01 June 2025 23:43:16 +0000 (0:00:00.197) 0:00:00.197 *********** 2025-06-01 23:47:44.728585 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:47:44.728597 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:47:44.728608 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:47:44.728619 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:47:44.728629 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:47:44.728640 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:47:44.728651 | orchestrator | 2025-06-01 23:47:44.728711 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-06-01 23:47:44.728735 | orchestrator | Sunday 01 June 2025 23:43:17 +0000 (0:00:00.890) 0:00:01.087 *********** 2025-06-01 23:47:44.728762 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:47:44.728783 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:47:44.728800 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:47:44.728818 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:47:44.728835 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:47:44.728853 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:47:44.728870 | orchestrator | 2025-06-01 23:47:44.728889 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-06-01 23:47:44.728908 | orchestrator | Sunday 01 June 2025 23:43:18 +0000 (0:00:00.774) 0:00:01.862 *********** 2025-06-01 23:47:44.728927 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:47:44.728945 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:47:44.729024 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:47:44.729045 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:47:44.729062 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:47:44.729082 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:47:44.729100 | orchestrator | 2025-06-01 23:47:44.729119 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-06-01 23:47:44.729137 | orchestrator | Sunday 01 June 2025 23:43:19 +0000 (0:00:00.922) 0:00:02.784 *********** 2025-06-01 23:47:44.729155 | orchestrator | changed: [testbed-node-4] 2025-06-01 23:47:44.729170 | orchestrator | changed: [testbed-node-3] 2025-06-01 23:47:44.729181 | orchestrator | changed: [testbed-node-5] 2025-06-01 23:47:44.729192 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:47:44.729203 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:47:44.729213 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:47:44.729224 | orchestrator | 2025-06-01 23:47:44.729235 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-06-01 23:47:44.729246 | orchestrator | Sunday 01 June 2025 23:43:21 +0000 (0:00:02.203) 0:00:04.988 *********** 2025-06-01 23:47:44.729256 | orchestrator | changed: [testbed-node-3] 2025-06-01 23:47:44.729267 | orchestrator | changed: [testbed-node-4] 2025-06-01 
23:47:44.729277 | orchestrator | changed: [testbed-node-5] 2025-06-01 23:47:44.729288 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:47:44.729298 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:47:44.729309 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:47:44.729320 | orchestrator | 2025-06-01 23:47:44.729330 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-06-01 23:47:44.729341 | orchestrator | Sunday 01 June 2025 23:43:22 +0000 (0:00:01.298) 0:00:06.286 *********** 2025-06-01 23:47:44.729352 | orchestrator | changed: [testbed-node-3] 2025-06-01 23:47:44.729363 | orchestrator | changed: [testbed-node-4] 2025-06-01 23:47:44.729373 | orchestrator | changed: [testbed-node-5] 2025-06-01 23:47:44.729384 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:47:44.729394 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:47:44.729405 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:47:44.729415 | orchestrator | 2025-06-01 23:47:44.729426 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-06-01 23:47:44.729437 | orchestrator | Sunday 01 June 2025 23:43:23 +0000 (0:00:01.353) 0:00:07.640 *********** 2025-06-01 23:47:44.729447 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:47:44.729458 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:47:44.729468 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:47:44.729479 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:47:44.729489 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:47:44.729500 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:47:44.729511 | orchestrator | 2025-06-01 23:47:44.729522 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-06-01 23:47:44.729532 | orchestrator | Sunday 01 June 2025 23:43:24 +0000 (0:00:00.837) 0:00:08.478 *********** 2025-06-01 23:47:44.729543 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:47:44.729554 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:47:44.729564 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:47:44.729574 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:47:44.729585 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:47:44.729596 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:47:44.729606 | orchestrator | 2025-06-01 23:47:44.729617 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-06-01 23:47:44.729628 | orchestrator | Sunday 01 June 2025 23:43:25 +0000 (0:00:00.687) 0:00:09.165 *********** 2025-06-01 23:47:44.729638 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-01 23:47:44.729649 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-01 23:47:44.729660 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:47:44.729671 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-01 23:47:44.729691 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-01 23:47:44.729702 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:47:44.729724 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-01 23:47:44.729736 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-01 
23:47:44.729746 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:47:44.729757 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-01 23:47:44.729783 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-01 23:47:44.729795 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:47:44.729805 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-01 23:47:44.729816 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-01 23:47:44.729827 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:47:44.729837 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-01 23:47:44.729848 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-01 23:47:44.729859 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:47:44.729869 | orchestrator | 2025-06-01 23:47:44.729880 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-06-01 23:47:44.729891 | orchestrator | Sunday 01 June 2025 23:43:26 +0000 (0:00:01.035) 0:00:10.200 *********** 2025-06-01 23:47:44.729902 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:47:44.729913 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:47:44.729923 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:47:44.729934 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:47:44.729945 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:47:44.729955 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:47:44.729966 | orchestrator | 2025-06-01 23:47:44.729977 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-06-01 23:47:44.729989 | orchestrator | Sunday 01 June 2025 23:43:28 +0000 (0:00:01.604) 0:00:11.804 *********** 2025-06-01 23:47:44.730077 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:47:44.730091 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:47:44.730101 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:47:44.730112 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:47:44.730123 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:47:44.730133 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:47:44.730144 | orchestrator | 2025-06-01 23:47:44.730155 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-06-01 23:47:44.730166 | orchestrator | Sunday 01 June 2025 23:43:28 +0000 (0:00:00.615) 0:00:12.419 *********** 2025-06-01 23:47:44.730176 | orchestrator | changed: [testbed-node-5] 2025-06-01 23:47:44.730187 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:47:44.730198 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:47:44.730209 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:47:44.730220 | orchestrator | changed: [testbed-node-3] 2025-06-01 23:47:44.730230 | orchestrator | changed: [testbed-node-4] 2025-06-01 23:47:44.730241 | orchestrator | 2025-06-01 23:47:44.730252 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-06-01 23:47:44.730263 | orchestrator | Sunday 01 June 2025 23:43:34 +0000 (0:00:06.021) 0:00:18.441 *********** 2025-06-01 23:47:44.730274 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:47:44.730284 | orchestrator | skipping: [testbed-node-4] 2025-06-01 
23:47:44.730295 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:47:44.730306 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:47:44.730317 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:47:44.730327 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:47:44.730338 | orchestrator | 2025-06-01 23:47:44.730349 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-06-01 23:47:44.730368 | orchestrator | Sunday 01 June 2025 23:43:36 +0000 (0:00:01.605) 0:00:20.046 *********** 2025-06-01 23:47:44.730379 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:47:44.730390 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:47:44.730400 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:47:44.730411 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:47:44.730421 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:47:44.730432 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:47:44.730443 | orchestrator | 2025-06-01 23:47:44.730454 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-06-01 23:47:44.730467 | orchestrator | Sunday 01 June 2025 23:43:37 +0000 (0:00:01.631) 0:00:21.677 *********** 2025-06-01 23:47:44.730478 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:47:44.730488 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:47:44.730499 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:47:44.730509 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:47:44.730520 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:47:44.730530 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:47:44.730541 | orchestrator | 2025-06-01 23:47:44.730552 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-06-01 23:47:44.730563 | orchestrator | Sunday 01 June 2025 23:43:39 +0000 (0:00:01.087) 0:00:22.765 *********** 2025-06-01 23:47:44.730574 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2025-06-01 23:47:44.730585 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2025-06-01 23:47:44.730596 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:47:44.730606 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2025-06-01 23:47:44.730617 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2025-06-01 23:47:44.730628 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:47:44.730638 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2025-06-01 23:47:44.730649 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2025-06-01 23:47:44.730659 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:47:44.730670 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2025-06-01 23:47:44.730681 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2025-06-01 23:47:44.730691 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:47:44.730702 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2025-06-01 23:47:44.730713 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2025-06-01 23:47:44.730729 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:47:44.730740 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2025-06-01 23:47:44.730751 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2025-06-01 23:47:44.730761 | orchestrator | skipping: 
[testbed-node-2] 2025-06-01 23:47:44.730772 | orchestrator | 2025-06-01 23:47:44.730783 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-06-01 23:47:44.730800 | orchestrator | Sunday 01 June 2025 23:43:40 +0000 (0:00:01.485) 0:00:24.250 *********** 2025-06-01 23:47:44.730811 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:47:44.730822 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:47:44.730832 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:47:44.730843 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:47:44.730854 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:47:44.730865 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:47:44.730875 | orchestrator | 2025-06-01 23:47:44.730886 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-06-01 23:47:44.730897 | orchestrator | 2025-06-01 23:47:44.730908 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-06-01 23:47:44.730920 | orchestrator | Sunday 01 June 2025 23:43:41 +0000 (0:00:01.132) 0:00:25.383 *********** 2025-06-01 23:47:44.730931 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:47:44.730948 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:47:44.730959 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:47:44.730970 | orchestrator | 2025-06-01 23:47:44.730981 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-06-01 23:47:44.730992 | orchestrator | Sunday 01 June 2025 23:43:42 +0000 (0:00:01.371) 0:00:26.755 *********** 2025-06-01 23:47:44.731164 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:47:44.731200 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:47:44.731211 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:47:44.731222 | orchestrator | 2025-06-01 23:47:44.731231 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-06-01 23:47:44.731241 | orchestrator | Sunday 01 June 2025 23:43:44 +0000 (0:00:01.451) 0:00:28.206 *********** 2025-06-01 23:47:44.731251 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:47:44.731260 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:47:44.731269 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:47:44.731279 | orchestrator | 2025-06-01 23:47:44.731288 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-06-01 23:47:44.731298 | orchestrator | Sunday 01 June 2025 23:43:45 +0000 (0:00:01.430) 0:00:29.637 *********** 2025-06-01 23:47:44.731307 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:47:44.731317 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:47:44.731326 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:47:44.731335 | orchestrator | 2025-06-01 23:47:44.731345 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-06-01 23:47:44.731355 | orchestrator | Sunday 01 June 2025 23:43:46 +0000 (0:00:00.873) 0:00:30.510 *********** 2025-06-01 23:47:44.731364 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:47:44.731374 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:47:44.731383 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:47:44.731393 | orchestrator | 2025-06-01 23:47:44.731402 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-06-01 23:47:44.731412 | orchestrator | 
Sunday 01 June 2025 23:43:47 +0000 (0:00:00.328) 0:00:30.839 *********** 2025-06-01 23:47:44.731422 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:47:44.731431 | orchestrator | 2025-06-01 23:47:44.731441 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-06-01 23:47:44.731450 | orchestrator | Sunday 01 June 2025 23:43:47 +0000 (0:00:00.745) 0:00:31.584 *********** 2025-06-01 23:47:44.731460 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:47:44.731469 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:47:44.731479 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:47:44.731488 | orchestrator | 2025-06-01 23:47:44.731498 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-06-01 23:47:44.731507 | orchestrator | Sunday 01 June 2025 23:43:50 +0000 (0:00:02.342) 0:00:33.927 *********** 2025-06-01 23:47:44.731516 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:47:44.731526 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:47:44.731535 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:47:44.731545 | orchestrator | 2025-06-01 23:47:44.731554 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-06-01 23:47:44.731564 | orchestrator | Sunday 01 June 2025 23:43:51 +0000 (0:00:00.870) 0:00:34.798 *********** 2025-06-01 23:47:44.731573 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:47:44.731583 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:47:44.731592 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:47:44.731602 | orchestrator | 2025-06-01 23:47:44.731611 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-06-01 23:47:44.731621 | orchestrator | Sunday 01 June 2025 23:43:52 +0000 (0:00:01.015) 0:00:35.813 *********** 2025-06-01 23:47:44.731631 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:47:44.731640 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:47:44.731649 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:47:44.731659 | orchestrator | 2025-06-01 23:47:44.731679 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-06-01 23:47:44.731688 | orchestrator | Sunday 01 June 2025 23:43:54 +0000 (0:00:02.602) 0:00:38.415 *********** 2025-06-01 23:47:44.731698 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:47:44.731707 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:47:44.731717 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:47:44.731726 | orchestrator | 2025-06-01 23:47:44.731736 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-06-01 23:47:44.731745 | orchestrator | Sunday 01 June 2025 23:43:55 +0000 (0:00:00.621) 0:00:39.037 *********** 2025-06-01 23:47:44.731755 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:47:44.731764 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:47:44.731774 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:47:44.731783 | orchestrator | 2025-06-01 23:47:44.731793 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-06-01 23:47:44.731802 | orchestrator | Sunday 01 June 2025 23:43:55 +0000 (0:00:00.493) 0:00:39.530 *********** 2025-06-01 23:47:44.731812 | orchestrator | changed: [testbed-node-0] 
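The task name "Init cluster inside the transient k3s-init service" indicates that the role bootstraps the first control-plane pass under a throwaway systemd unit rather than the final k3s.service; the permanent unit is installed a few tasks later ("Copy K3s service file") and the temporary one is torn down ("Kill the temporary service used for initialization"). A sketch of that pattern, assuming only standard systemd-run semantics; the exact properties and k3s flags the role passes are not visible in this log:

    import subprocess

    def start_transient_k3s_init(extra_args: list[str]) -> None:
        # Run "k3s server" under a transient unit named k3s-init so the
        # bootstrap can later be stopped with a plain systemctl command.
        # The restart properties and arguments here are assumptions, not a
        # transcript of what the role actually executes.
        subprocess.run(
            [
                "systemd-run",
                "--unit=k3s-init",
                "-p", "RestartSec=2",
                "-p", "Restart=on-failure",
                "k3s", "server", *extra_args,
            ],
            check=True,
        )

    start_transient_k3s_init(["--cluster-init"])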
2025-06-01 23:47:44.731828 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:47:44.731838 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:47:44.731847 | orchestrator | 2025-06-01 23:47:44.731857 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-06-01 23:47:44.731867 | orchestrator | Sunday 01 June 2025 23:43:58 +0000 (0:00:02.322) 0:00:41.853 *********** 2025-06-01 23:47:44.731889 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-06-01 23:47:44.731900 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-06-01 23:47:44.731910 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-06-01 23:47:44.731919 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-06-01 23:47:44.731929 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-06-01 23:47:44.731939 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-06-01 23:47:44.731949 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-06-01 23:47:44.731959 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-06-01 23:47:44.731968 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-06-01 23:47:44.731978 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-06-01 23:47:44.731987 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-06-01 23:47:44.731997 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-06-01 23:47:44.732052 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-06-01 23:47:44.732069 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
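The FAILED - RETRYING burst above is Ansible's standard until/retries/delay pattern: the verify task re-runs its check until all expected nodes have joined, with 20 attempts before giving up, and succeeds here after roughly a minute. A self-contained sketch of the same shape; count_ready_nodes() is a hypothetical placeholder for whatever node-listing the role parses on the first master, and the delay is shortened so the sketch runs quickly:

    import time

    _checks = iter([0, 1, 1, 2, 2, 3])  # hypothetical: nodes join gradually

    def count_ready_nodes() -> int:
        # Placeholder for counting joined nodes on the first master.
        return next(_checks, 3)

    def verify_all_nodes_joined(expected: int = 3, retries: int = 20,
                                delay: float = 1.0) -> None:
        # Same shape as the task's until/retries loop in the log above.
        for remaining in range(retries, 0, -1):
            if count_ready_nodes() >= expected:
                return
            print("FAILED - RETRYING: Verify that all nodes actually joined "
                  "(check k3s-init.service if this fails) "
                  f"({remaining} retries left).")
            time.sleep(delay)
        raise RuntimeError("nodes never joined; check k3s-init.service")

    verify_all_nodes_joined()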
2025-06-01 23:47:44.732087 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:47:44.732098 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:47:44.732114 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:47:44.732124 | orchestrator | 2025-06-01 23:47:44.732134 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-06-01 23:47:44.732144 | orchestrator | Sunday 01 June 2025 23:44:53 +0000 (0:00:55.631) 0:01:37.484 *********** 2025-06-01 23:47:44.732153 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:47:44.732163 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:47:44.732173 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:47:44.732182 | orchestrator | 2025-06-01 23:47:44.732192 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-06-01 23:47:44.732202 | orchestrator | Sunday 01 June 2025 23:44:54 +0000 (0:00:00.429) 0:01:37.914 *********** 2025-06-01 23:47:44.732211 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:47:44.732221 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:47:44.732230 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:47:44.732240 | orchestrator | 2025-06-01 23:47:44.732249 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-06-01 23:47:44.732259 | orchestrator | Sunday 01 June 2025 23:44:55 +0000 (0:00:01.069) 0:01:38.984 *********** 2025-06-01 23:47:44.732269 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:47:44.732278 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:47:44.732288 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:47:44.732297 | orchestrator | 2025-06-01 23:47:44.732307 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-06-01 23:47:44.732316 | orchestrator | Sunday 01 June 2025 23:44:56 +0000 (0:00:01.552) 0:01:40.536 *********** 2025-06-01 23:47:44.732326 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:47:44.732335 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:47:44.732345 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:47:44.732354 | orchestrator | 2025-06-01 23:47:44.732364 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-06-01 23:47:44.732374 | orchestrator | Sunday 01 June 2025 23:45:11 +0000 (0:00:14.954) 0:01:55.491 *********** 2025-06-01 23:47:44.732383 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:47:44.732393 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:47:44.732403 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:47:44.732412 | orchestrator | 2025-06-01 23:47:44.732422 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-06-01 23:47:44.732432 | orchestrator | Sunday 01 June 2025 23:45:12 +0000 (0:00:00.747) 0:01:56.238 *********** 2025-06-01 23:47:44.732441 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:47:44.732451 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:47:44.732460 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:47:44.732470 | orchestrator | 2025-06-01 23:47:44.732480 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-06-01 23:47:44.732495 | orchestrator | Sunday 01 June 2025 23:45:13 +0000 (0:00:00.671) 0:01:56.909 *********** 2025-06-01 23:47:44.732505 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:47:44.732514 | orchestrator | changed: 
[testbed-node-1] 2025-06-01 23:47:44.732524 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:47:44.732533 | orchestrator | 2025-06-01 23:47:44.732543 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-06-01 23:47:44.732559 | orchestrator | Sunday 01 June 2025 23:45:13 +0000 (0:00:00.585) 0:01:57.495 *********** 2025-06-01 23:47:44.732570 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:47:44.732579 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:47:44.732589 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:47:44.732598 | orchestrator | 2025-06-01 23:47:44.732609 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-06-01 23:47:44.732620 | orchestrator | Sunday 01 June 2025 23:45:14 +0000 (0:00:00.835) 0:01:58.330 *********** 2025-06-01 23:47:44.732631 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:47:44.732642 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:47:44.732653 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:47:44.732663 | orchestrator | 2025-06-01 23:47:44.732681 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-06-01 23:47:44.732692 | orchestrator | Sunday 01 June 2025 23:45:14 +0000 (0:00:00.279) 0:01:58.609 *********** 2025-06-01 23:47:44.732703 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:47:44.732713 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:47:44.732724 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:47:44.732735 | orchestrator | 2025-06-01 23:47:44.732746 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-06-01 23:47:44.732756 | orchestrator | Sunday 01 June 2025 23:45:15 +0000 (0:00:00.602) 0:01:59.212 *********** 2025-06-01 23:47:44.732767 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:47:44.732778 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:47:44.732789 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:47:44.732800 | orchestrator | 2025-06-01 23:47:44.732811 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-06-01 23:47:44.732822 | orchestrator | Sunday 01 June 2025 23:45:16 +0000 (0:00:00.598) 0:01:59.810 *********** 2025-06-01 23:47:44.732833 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:47:44.732844 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:47:44.732855 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:47:44.732866 | orchestrator | 2025-06-01 23:47:44.732876 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-06-01 23:47:44.732887 | orchestrator | Sunday 01 June 2025 23:45:17 +0000 (0:00:01.131) 0:02:00.942 *********** 2025-06-01 23:47:44.732898 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:47:44.732909 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:47:44.732920 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:47:44.732931 | orchestrator | 2025-06-01 23:47:44.732942 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-06-01 23:47:44.732953 | orchestrator | Sunday 01 June 2025 23:45:17 +0000 (0:00:00.731) 0:02:01.674 *********** 2025-06-01 23:47:44.732963 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:47:44.732974 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:47:44.732985 | orchestrator | skipping: [testbed-node-2] 2025-06-01 
2025-06-01 23:47:44.732746 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2025-06-01 23:47:44.732756 | orchestrator | Sunday 01 June 2025 23:45:15 +0000 (0:00:00.602) 0:01:59.212 ***********
2025-06-01 23:47:44.732767 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:47:44.732778 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:47:44.732789 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:47:44.732800 | orchestrator |
2025-06-01 23:47:44.732811 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2025-06-01 23:47:44.732822 | orchestrator | Sunday 01 June 2025 23:45:16 +0000 (0:00:00.598) 0:01:59.810 ***********
2025-06-01 23:47:44.732833 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:47:44.732844 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:47:44.732855 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:47:44.732866 | orchestrator |
2025-06-01 23:47:44.732876 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2025-06-01 23:47:44.732887 | orchestrator | Sunday 01 June 2025 23:45:17 +0000 (0:00:01.131) 0:02:00.942 ***********
2025-06-01 23:47:44.732898 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:47:44.732909 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:47:44.732920 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:47:44.732931 | orchestrator |
2025-06-01 23:47:44.732942 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2025-06-01 23:47:44.732953 | orchestrator | Sunday 01 June 2025 23:45:17 +0000 (0:00:00.731) 0:02:01.674 ***********
2025-06-01 23:47:44.732963 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:47:44.732974 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:47:44.732985 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:47:44.732996 | orchestrator |
2025-06-01 23:47:44.733031 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2025-06-01 23:47:44.733043 | orchestrator | Sunday 01 June 2025 23:45:18 +0000 (0:00:00.270) 0:02:01.944 ***********
2025-06-01 23:47:44.733054 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:47:44.733065 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:47:44.733075 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:47:44.733086 | orchestrator |
2025-06-01 23:47:44.733097 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2025-06-01 23:47:44.733108 | orchestrator | Sunday 01 June 2025 23:45:18 +0000 (0:00:00.277) 0:02:02.222 ***********
2025-06-01 23:47:44.733119 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:47:44.733130 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:47:44.733141 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:47:44.733151 | orchestrator |
2025-06-01 23:47:44.733162 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2025-06-01 23:47:44.733173 | orchestrator | Sunday 01 June 2025 23:45:19 +0000 (0:00:00.867) 0:02:03.089 ***********
2025-06-01 23:47:44.733184 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:47:44.733195 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:47:44.733206 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:47:44.733217 | orchestrator |
2025-06-01 23:47:44.733228 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2025-06-01 23:47:44.733239 | orchestrator | Sunday 01 June 2025 23:45:19 +0000 (0:00:00.591) 0:02:03.680 ***********
2025-06-01 23:47:44.733250 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-06-01 23:47:44.733261 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-06-01 23:47:44.733279 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-06-01 23:47:44.733290 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-06-01 23:47:44.733301 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-06-01 23:47:44.733312 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-06-01 23:47:44.733323 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-06-01 23:47:44.733334 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-06-01 23:47:44.733345 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-06-01 23:47:44.733356 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-06-01 23:47:44.733374 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-06-01 23:47:44.733392 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2025-06-01 23:47:44.733408 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-06-01 23:47:44.733426 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-06-01 23:47:44.733437 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2025-06-01 23:47:44.733448 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-06-01 23:47:44.733459 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-06-01 23:47:44.733470 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-06-01 23:47:44.733480 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-06-01 23:47:44.733491 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-06-01 23:47:44.733502 | orchestrator |
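k3s re-applies every file under /var/lib/rancher/k3s/server/manifests/ on each service start, so the role deletes the manifests that were only needed to bootstrap the cluster. A sketch of such a cleanup task, with the item list taken from the loop results above (the role's actual implementation may differ):

    - name: Remove bootstrap-only manifests
      ansible.builtin.file:
        path: "/var/lib/rancher/k3s/server/manifests/{{ item }}"
        state: absent
      loop:
        - ccm.yaml
        - rolebindings.yaml
        - local-storage.yaml
        - runtimes.yaml
        - vip.yaml
        - vip-rbac.yaml
        - coredns.yaml
        - metrics-server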
2025-06-01 23:47:44.733513 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2025-06-01 23:47:44.733524 | orchestrator |
2025-06-01 23:47:44.733535 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2025-06-01 23:47:44.733546 | orchestrator | Sunday 01 June 2025 23:45:23 +0000 (0:00:03.173) 0:02:06.854 ***********
2025-06-01 23:47:44.733557 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:47:44.733568 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:47:44.733579 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:47:44.733589 | orchestrator |
2025-06-01 23:47:44.733600 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2025-06-01 23:47:44.733611 | orchestrator | Sunday 01 June 2025 23:45:23 +0000 (0:00:00.593) 0:02:07.447 ***********
2025-06-01 23:47:44.733622 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:47:44.733632 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:47:44.733643 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:47:44.733654 | orchestrator |
2025-06-01 23:47:44.733665 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2025-06-01 23:47:44.733676 | orchestrator | Sunday 01 June 2025 23:45:24 +0000 (0:00:00.594) 0:02:08.042 ***********
2025-06-01 23:47:44.733686 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:47:44.733697 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:47:44.733708 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:47:44.733719 | orchestrator |
2025-06-01 23:47:44.733730 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2025-06-01 23:47:44.733741 | orchestrator | Sunday 01 June 2025 23:45:24 +0000 (0:00:00.334) 0:02:08.377 ***********
2025-06-01 23:47:44.733751 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 23:47:44.733776 | orchestrator |
2025-06-01 23:47:44.733787 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2025-06-01 23:47:44.733798 | orchestrator | Sunday 01 June 2025 23:45:25 +0000 (0:00:00.706) 0:02:09.083 ***********
2025-06-01 23:47:44.733809 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:47:44.733820 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:47:44.733831 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:47:44.733842 | orchestrator |
2025-06-01 23:47:44.733853 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2025-06-01 23:47:44.733863 | orchestrator | Sunday 01 June 2025 23:45:25 +0000 (0:00:00.312) 0:02:09.396 ***********
2025-06-01 23:47:44.733874 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:47:44.733885 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:47:44.733896 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:47:44.733907 | orchestrator |
2025-06-01 23:47:44.733917 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2025-06-01 23:47:44.733928 | orchestrator | Sunday 01 June 2025 23:45:25 +0000 (0:00:00.293) 0:02:09.690 ***********
2025-06-01 23:47:44.733939 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:47:44.733950 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:47:44.733960 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:47:44.733971 | orchestrator |
2025-06-01 23:47:44.733982 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2025-06-01 23:47:44.733993 | orchestrator | Sunday 01 June 2025 23:45:26 +0000 (0:00:00.307) 0:02:09.997 ***********
2025-06-01 23:47:44.734063 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:47:44.734076 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:47:44.734087 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:47:44.734098 | orchestrator |
2025-06-01 23:47:44.734109 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2025-06-01 23:47:44.734119 | orchestrator | Sunday 01 June 2025 23:45:27 +0000 (0:00:01.404) 0:02:11.402 ***********
2025-06-01 23:47:44.734130 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:47:44.734141 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:47:44.734152 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:47:44.734162 | orchestrator |
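"Configure the k3s service" and "Manage k3s service" template a systemd unit on each worker and start it. The unit itself is not logged; a plausible minimal shape, using the API address seen in the server play and the stored node-token (both values illustrative here, not taken from the role):

    [Unit]
    Description=k3s agent
    After=network-online.target

    [Service]
    Type=exec
    ExecStart=/usr/local/bin/k3s agent --server https://192.168.16.8:6443 --token <node-token>
    Restart=always
    RestartSec=5

    [Install]
    WantedBy=multi-user.target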
2025-06-01 23:47:44.734173 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-06-01 23:47:44.734184 | orchestrator |
2025-06-01 23:47:44.734195 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-06-01 23:47:44.734206 | orchestrator | Sunday 01 June 2025 23:45:36 +0000 (0:00:08.731) 0:02:20.134 ***********
2025-06-01 23:47:44.734216 | orchestrator | ok: [testbed-manager]
2025-06-01 23:47:44.734227 | orchestrator |
2025-06-01 23:47:44.734238 | orchestrator | TASK [Create .kube directory] **************************************************
2025-06-01 23:47:44.734249 | orchestrator | Sunday 01 June 2025 23:45:37 +0000 (0:00:00.765) 0:02:20.899 ***********
2025-06-01 23:47:44.734260 | orchestrator | changed: [testbed-manager]
2025-06-01 23:47:44.734271 | orchestrator |
2025-06-01 23:47:44.734282 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-06-01 23:47:44.734293 | orchestrator | Sunday 01 June 2025 23:45:37 +0000 (0:00:00.435) 0:02:21.335 ***********
2025-06-01 23:47:44.734304 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-06-01 23:47:44.734314 | orchestrator |
2025-06-01 23:47:44.734325 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-06-01 23:47:44.734336 | orchestrator | Sunday 01 June 2025 23:45:38 +0000 (0:00:00.992) 0:02:22.327 ***********
2025-06-01 23:47:44.734347 | orchestrator | changed: [testbed-manager]
2025-06-01 23:47:44.734358 | orchestrator |
2025-06-01 23:47:44.734376 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-06-01 23:47:44.734388 | orchestrator | Sunday 01 June 2025 23:45:39 +0000 (0:00:00.833) 0:02:23.160 ***********
2025-06-01 23:47:44.734399 | orchestrator | changed: [testbed-manager]
2025-06-01 23:47:44.734410 | orchestrator |
2025-06-01 23:47:44.734428 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-06-01 23:47:44.734439 | orchestrator | Sunday 01 June 2025 23:45:39 +0000 (0:00:00.581) 0:02:23.742 ***********
2025-06-01 23:47:44.734450 | orchestrator | changed: [testbed-manager -> localhost]
2025-06-01 23:47:44.734461 | orchestrator |
2025-06-01 23:47:44.734472 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-06-01 23:47:44.734483 | orchestrator | Sunday 01 June 2025 23:45:41 +0000 (0:00:01.637) 0:02:25.379 ***********
2025-06-01 23:47:44.734494 | orchestrator | changed: [testbed-manager -> localhost]
2025-06-01 23:47:44.734505 | orchestrator |
2025-06-01 23:47:44.734516 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-06-01 23:47:44.734527 | orchestrator | Sunday 01 June 2025 23:45:42 +0000 (0:00:00.892) 0:02:26.272 ***********
2025-06-01 23:47:44.734538 | orchestrator | changed: [testbed-manager]
2025-06-01 23:47:44.734548 | orchestrator |
2025-06-01 23:47:44.734559 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-06-01 23:47:44.734570 | orchestrator | Sunday 01 June 2025 23:45:42 +0000 (0:00:00.402) 0:02:26.674 ***********
2025-06-01 23:47:44.734581 | orchestrator | changed: [testbed-manager]
2025-06-01 23:47:44.734592 | orchestrator |
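k3s writes its kubeconfig pointing at the local API endpoint, which is useless off-node, so the "Change server address" tasks rewrite the server field to the kube VIP before the file is used from the manager. Functionally this amounts to the following sketch (VIP from the log; the 127.0.0.1 default and the path variable are assumptions):

    - name: Change server address in the kubeconfig
      ansible.builtin.replace:
        path: "{{ ansible_user_dir }}/.kube/config"
        regexp: 'https://127\.0\.0\.1:6443'
        replace: 'https://192.168.16.8:6443'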
2025-06-01 23:47:44.734603 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2025-06-01 23:47:44.734614 | orchestrator |
2025-06-01 23:47:44.734625 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2025-06-01 23:47:44.734635 | orchestrator | Sunday 01 June 2025 23:45:43 +0000 (0:00:00.406) 0:02:27.081 ***********
2025-06-01 23:47:44.734646 | orchestrator | ok: [testbed-manager]
2025-06-01 23:47:44.734657 | orchestrator |
2025-06-01 23:47:44.734668 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2025-06-01 23:47:44.734679 | orchestrator | Sunday 01 June 2025 23:45:43 +0000 (0:00:00.135) 0:02:27.216 ***********
2025-06-01 23:47:44.734689 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2025-06-01 23:47:44.734700 | orchestrator |
2025-06-01 23:47:44.735451 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2025-06-01 23:47:44.735477 | orchestrator | Sunday 01 June 2025 23:45:43 +0000 (0:00:00.196) 0:02:27.413 ***********
2025-06-01 23:47:44.735491 | orchestrator | ok: [testbed-manager]
2025-06-01 23:47:44.735503 | orchestrator |
2025-06-01 23:47:44.735516 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2025-06-01 23:47:44.735530 | orchestrator | Sunday 01 June 2025 23:45:44 +0000 (0:00:01.230) 0:02:28.643 ***********
2025-06-01 23:47:44.735542 | orchestrator | ok: [testbed-manager]
2025-06-01 23:47:44.735554 | orchestrator |
2025-06-01 23:47:44.735567 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2025-06-01 23:47:44.735579 | orchestrator | Sunday 01 June 2025 23:45:46 +0000 (0:00:01.728) 0:02:30.372 ***********
2025-06-01 23:47:44.735590 | orchestrator | changed: [testbed-manager]
2025-06-01 23:47:44.735601 | orchestrator |
2025-06-01 23:47:44.735613 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2025-06-01 23:47:44.735624 | orchestrator | Sunday 01 June 2025 23:45:47 +0000 (0:00:00.893) 0:02:31.265 ***********
2025-06-01 23:47:44.735635 | orchestrator | ok: [testbed-manager]
2025-06-01 23:47:44.735645 | orchestrator |
2025-06-01 23:47:44.735656 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2025-06-01 23:47:44.735667 | orchestrator | Sunday 01 June 2025 23:45:47 +0000 (0:00:00.419) 0:02:31.684 ***********
2025-06-01 23:47:44.735678 | orchestrator | changed: [testbed-manager]
2025-06-01 23:47:44.735688 | orchestrator |
2025-06-01 23:47:44.735699 | orchestrator | TASK [kubectl : Install required packages] *************************************
2025-06-01 23:47:44.735710 | orchestrator | Sunday 01 June 2025 23:45:54 +0000 (0:00:06.758) 0:02:38.443 ***********
2025-06-01 23:47:44.735721 | orchestrator | changed: [testbed-manager]
2025-06-01 23:47:44.735731 | orchestrator |
2025-06-01 23:47:44.735743 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2025-06-01 23:47:44.735764 | orchestrator | Sunday 01 June 2025 23:46:05 +0000 (0:00:10.710) 0:02:49.153 ***********
2025-06-01 23:47:44.735775 | orchestrator | ok: [testbed-manager]
2025-06-01 23:47:44.735786 | orchestrator |
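The kubectl role follows the current upstream install route for Debian-family systems: drop the old architecture-dependent apt.kubernetes.io entry, then add the community-owned pkgs.k8s.io repository and install from there. By hand this is roughly (key path and version branch illustrative):

    curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.33/deb/Release.key \
      -o /etc/apt/keyrings/kubernetes-apt-keyring.asc
    echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.asc] https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /' \
      > /etc/apt/sources.list.d/kubernetes.list
    apt-get update && apt-get install -y kubectl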
2025-06-01 23:47:44.735798 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2025-06-01 23:47:44.735809 | orchestrator |
2025-06-01 23:47:44.735819 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2025-06-01 23:47:44.735829 | orchestrator | Sunday 01 June 2025 23:46:05 +0000 (0:00:00.534) 0:02:49.688 ***********
2025-06-01 23:47:44.735838 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:47:44.735848 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:47:44.735858 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:47:44.735868 | orchestrator |
2025-06-01 23:47:44.735877 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2025-06-01 23:47:44.735887 | orchestrator | Sunday 01 June 2025 23:46:06 +0000 (0:00:00.621) 0:02:50.309 ***********
2025-06-01 23:47:44.735897 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:47:44.735906 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:47:44.735916 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:47:44.735926 | orchestrator |
2025-06-01 23:47:44.735935 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2025-06-01 23:47:44.735950 | orchestrator | Sunday 01 June 2025 23:46:06 +0000 (0:00:00.371) 0:02:50.681 ***********
2025-06-01 23:47:44.735961 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 23:47:44.735970 | orchestrator |
2025-06-01 23:47:44.735980 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2025-06-01 23:47:44.735990 | orchestrator | Sunday 01 June 2025 23:46:07 +0000 (0:00:00.590) 0:02:51.271 ***********
2025-06-01 23:47:44.736014 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-06-01 23:47:44.736024 | orchestrator |
2025-06-01 23:47:44.736044 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2025-06-01 23:47:44.736054 | orchestrator | Sunday 01 June 2025 23:46:08 +0000 (0:00:01.073) 0:02:52.345 ***********
2025-06-01 23:47:44.736064 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-01 23:47:44.736074 | orchestrator |
2025-06-01 23:47:44.736084 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2025-06-01 23:47:44.736094 | orchestrator | Sunday 01 June 2025 23:46:09 +0000 (0:00:01.105) 0:02:53.451 ***********
2025-06-01 23:47:44.736103 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:47:44.736113 | orchestrator |
2025-06-01 23:47:44.736139 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2025-06-01 23:47:44.736158 | orchestrator | Sunday 01 June 2025 23:46:10 +0000 (0:00:00.866) 0:02:54.318 ***********
2025-06-01 23:47:44.736168 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-01 23:47:44.736178 | orchestrator |
2025-06-01 23:47:44.736188 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2025-06-01 23:47:44.736198 | orchestrator | Sunday 01 June 2025 23:46:11 +0000 (0:00:01.205) 0:02:55.524 ***********
2025-06-01 23:47:44.736207 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:47:44.736217 | orchestrator |
2025-06-01 23:47:44.736226 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2025-06-01 23:47:44.736236 | orchestrator | Sunday 01 June 2025 23:46:11 +0000 (0:00:00.188) 0:02:55.712 ***********
2025-06-01 23:47:44.736246 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:47:44.736255 | orchestrator |
2025-06-01 23:47:44.736265 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2025-06-01 23:47:44.736275 | orchestrator | Sunday 01 June 2025 23:46:12 +0000 (0:00:00.224) 0:02:55.937 ***********
2025-06-01 23:47:44.736284 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:47:44.736294 | orchestrator |
2025-06-01 23:47:44.736304 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2025-06-01 23:47:44.736313 | orchestrator | Sunday 01 June 2025 23:46:12 +0000 (0:00:00.228) 0:02:56.166 ***********
2025-06-01 23:47:44.736330 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:47:44.736339 | orchestrator |
2025-06-01 23:47:44.736349 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2025-06-01 23:47:44.736359 | orchestrator | Sunday 01 June 2025 23:46:12 +0000 (0:00:00.205) 0:02:56.371 ***********
2025-06-01 23:47:44.736368 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-06-01 23:47:44.736378 | orchestrator |
2025-06-01 23:47:44.736387 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2025-06-01 23:47:44.736397 | orchestrator | Sunday 01 June 2025 23:46:19 +0000 (0:00:06.394) 0:03:02.765 ***********
2025-06-01 23:47:44.736406 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2025-06-01 23:47:44.736416 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2025-06-01 23:47:44.736426 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2025-06-01 23:47:44.736436 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2025-06-01 23:47:44.736446 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2025-06-01 23:47:44.736455 | orchestrator |
2025-06-01 23:47:44.736465 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2025-06-01 23:47:44.736474 | orchestrator | Sunday 01 June 2025 23:47:11 +0000 (0:00:52.700) 0:03:55.466 ***********
2025-06-01 23:47:44.736484 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-01 23:47:44.736493 | orchestrator |
2025-06-01 23:47:44.736503 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2025-06-01 23:47:44.736513 | orchestrator | Sunday 01 June 2025 23:47:12 +0000 (0:00:01.162) 0:03:56.628 ***********
2025-06-01 23:47:44.736522 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-06-01 23:47:44.736532 | orchestrator |
2025-06-01 23:47:44.736541 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2025-06-01 23:47:44.736551 | orchestrator | Sunday 01 June 2025 23:47:14 +0000 (0:00:01.824) 0:03:58.452 ***********
2025-06-01 23:47:44.736561 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-06-01 23:47:44.736570 | orchestrator |
2025-06-01 23:47:44.736580 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2025-06-01 23:47:44.736590 | orchestrator | Sunday 01 June 2025 23:47:16 +0000 (0:00:01.307) 0:03:59.760 ***********
2025-06-01 23:47:44.736599 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:47:44.736609 | orchestrator |
2025-06-01 23:47:44.736618 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2025-06-01 23:47:44.736628 | orchestrator | Sunday 01 June 2025 23:47:16 +0000 (0:00:00.197) 0:03:59.957 ***********
2025-06-01 23:47:44.736637 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2025-06-01 23:47:44.736647 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2025-06-01 23:47:44.736657 | orchestrator |
2025-06-01 23:47:44.736666 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2025-06-01 23:47:44.736676 | orchestrator | Sunday 01 June 2025 23:47:19 +0000 (0:00:02.967) 0:04:02.925 ***********
2025-06-01 23:47:44.736686 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:47:44.736696 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:47:44.736705 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:47:44.736715 | orchestrator |
2025-06-01 23:47:44.736729 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2025-06-01 23:47:44.736739 | orchestrator | Sunday 01 June 2025 23:47:19 +0000 (0:00:00.382) 0:04:03.307 ***********
2025-06-01 23:47:44.736749 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:47:44.736758 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:47:44.736768 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:47:44.736778 | orchestrator |
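"Wait for Cilium resources" accounts for most of this play (52.70s, including one retry): it loops over the Cilium workloads until each rollout completes. The equivalent checks by hand would look like this (namespace and timeout are assumptions, not taken from the role):

    kubectl -n kube-system rollout status deployment/cilium-operator --timeout=300s
    kubectl -n kube-system rollout status daemonset/cilium --timeout=300s
    kubectl -n kube-system rollout status deployment/hubble-relay --timeout=300s
    kubectl -n kube-system rollout status deployment/hubble-ui --timeout=300s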
2025-06-01 23:47:44.736787 | orchestrator | PLAY [Apply role k9s] **********************************************************
2025-06-01 23:47:44.736797 | orchestrator |
2025-06-01 23:47:44.736813 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2025-06-01 23:47:44.736828 | orchestrator | Sunday 01 June 2025 23:47:20 +0000 (0:00:00.987) 0:04:04.295 ***********
2025-06-01 23:47:44.736839 | orchestrator | ok: [testbed-manager]
2025-06-01 23:47:44.736849 | orchestrator |
2025-06-01 23:47:44.736858 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2025-06-01 23:47:44.736868 | orchestrator | Sunday 01 June 2025 23:47:20 +0000 (0:00:00.137) 0:04:04.432 ***********
2025-06-01 23:47:44.736878 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2025-06-01 23:47:44.736888 | orchestrator |
2025-06-01 23:47:44.736898 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2025-06-01 23:47:44.736907 | orchestrator | Sunday 01 June 2025 23:47:21 +0000 (0:00:00.412) 0:04:04.844 ***********
2025-06-01 23:47:44.736917 | orchestrator | changed: [testbed-manager]
2025-06-01 23:47:44.736927 | orchestrator |
2025-06-01 23:47:44.736937 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2025-06-01 23:47:44.736946 | orchestrator |
2025-06-01 23:47:44.736956 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2025-06-01 23:47:44.736966 | orchestrator | Sunday 01 June 2025 23:47:27 +0000 (0:00:06.018) 0:04:10.863 ***********
2025-06-01 23:47:44.736976 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:47:44.736985 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:47:44.736995 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:47:44.737020 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:47:44.737030 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:47:44.737040 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:47:44.737050 | orchestrator |
2025-06-01 23:47:44.737060 | orchestrator | TASK [Manage labels] ***********************************************************
2025-06-01 23:47:44.737070 | orchestrator | Sunday 01 June 2025 23:47:27 +0000 (0:00:00.759) 0:04:11.623 ***********
2025-06-01 23:47:44.737080 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-06-01 23:47:44.737090 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-06-01 23:47:44.737099 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-06-01 23:47:44.737109 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-06-01 23:47:44.737119 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-06-01 23:47:44.737129 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-06-01 23:47:44.737138 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-06-01 23:47:44.737148 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-06-01 23:47:44.737158 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-06-01 23:47:44.737168 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2025-06-01 23:47:44.737177 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2025-06-01 23:47:44.737187 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-06-01 23:47:44.737197 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-06-01 23:47:44.737206 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-06-01 23:47:44.737216 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2025-06-01 23:47:44.737226 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-06-01 23:47:44.737236 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-06-01 23:47:44.737245 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-06-01 23:47:44.737262 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-06-01 23:47:44.737272 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-06-01 23:47:44.737281 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-06-01 23:47:44.737291 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-06-01 23:47:44.737301 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-06-01 23:47:44.737311 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-06-01 23:47:44.737320 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-06-01 23:47:44.737330 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-06-01 23:47:44.737340 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-06-01 23:47:44.737354 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-06-01 23:47:44.737364 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-06-01 23:47:44.737374 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-06-01 23:47:44.737384 | orchestrator |
2025-06-01 23:47:44.737394 | orchestrator | TASK [Manage annotations] ******************************************************
2025-06-01 23:47:44.737404 | orchestrator | Sunday 01 June 2025 23:47:40 +0000 (0:00:12.735) 0:04:24.358 ***********
2025-06-01 23:47:44.737419 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:47:44.737429 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:47:44.737439 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:47:44.737449 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:47:44.737459 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:47:44.737469 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:47:44.737478 | orchestrator |
2025-06-01 23:47:44.737488 | orchestrator | TASK [Manage taints] ***********************************************************
2025-06-01 23:47:44.737498 | orchestrator | Sunday 01 June 2025 23:47:41 +0000 (0:00:00.551) 0:04:24.910 ***********
2025-06-01 23:47:44.737508 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:47:44.737518 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:47:44.737527 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:47:44.737537 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:47:44.737547 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:47:44.737556 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:47:44.737566 | orchestrator |
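Each node ends up with its OSISM role labels applied from localhost: control-plane/compute-plane placement, Rook daemon placement (osd, mds, mgr, mon, rgw), and the network plane. Per item, the calls reduce to standard kubectl labeling, e.g.:

    kubectl label node testbed-node-0 node-role.osism.tech/control-plane=true --overwrite
    kubectl label node testbed-node-3 node-role.osism.tech/rook-osd=true --overwrite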
2025-06-01 23:47:44.737576 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 23:47:44.737586 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 23:47:44.737597 | orchestrator | testbed-node-0 : ok=46  changed=21  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0
2025-06-01 23:47:44.737607 | orchestrator | testbed-node-1 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0
2025-06-01 23:47:44.737618 | orchestrator | testbed-node-2 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0
2025-06-01 23:47:44.737627 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2025-06-01 23:47:44.737637 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2025-06-01 23:47:44.737647 | orchestrator | testbed-node-5 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2025-06-01 23:47:44.737663 | orchestrator |
2025-06-01 23:47:44.737672 | orchestrator |
2025-06-01 23:47:44.737682 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 23:47:44.737692 | orchestrator | Sunday 01 June 2025 23:47:41 +0000 (0:00:00.671) 0:04:25.581 ***********
2025-06-01 23:47:44.737702 | orchestrator | ===============================================================================
2025-06-01 23:47:44.737712 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 55.63s
2025-06-01 23:47:44.737722 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 52.70s
2025-06-01 23:47:44.737731 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 14.95s
2025-06-01 23:47:44.737741 | orchestrator | Manage labels ---------------------------------------------------------- 12.74s
2025-06-01 23:47:44.737751 | orchestrator | kubectl : Install required packages ------------------------------------ 10.71s
2025-06-01 23:47:44.737761 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.73s
2025-06-01 23:47:44.737770 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.76s
2025-06-01 23:47:44.737788 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 6.39s
2025-06-01 23:47:44.737804 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.02s
2025-06-01 23:47:44.737820 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.02s
2025-06-01 23:47:44.737836 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.17s
2025-06-01 23:47:44.737852 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.97s
2025-06-01 23:47:44.737869 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.60s
2025-06-01 23:47:44.737886 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.34s
2025-06-01 23:47:44.737899 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.32s
2025-06-01 23:47:44.737909 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.20s
2025-06-01 23:47:44.737918 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.82s
2025-06-01 23:47:44.737928 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.73s
2025-06-01 23:47:44.737937 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.64s
2025-06-01 23:47:44.737952 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.63s
2025-06-01 23:47:44.737962 | orchestrator | 2025-06-01 23:47:44 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:47:47.771295 | orchestrator | 2025-06-01 23:47:47 | INFO  | Task e8db727c-871f-43f4-9798-3d3b2d8e29b2 is in state STARTED
2025-06-01 23:47:47.771815 | orchestrator | 2025-06-01 23:47:47 | INFO  | Task d4a1121d-247b-4ac9-961a-536b315ff3be is in state STARTED
2025-06-01 23:47:47.772304 | orchestrator | 2025-06-01 23:47:47 | INFO  | Task cf7790bc-b9f2-462a-b640-62fc0b8882d4 is in state STARTED
2025-06-01 23:47:47.773518 | orchestrator | 2025-06-01 23:47:47 | INFO  | Task cadbd4c5-6e2a-4b82-814b-82036705b9c4 is in state STARTED
2025-06-01 23:47:47.774867 | orchestrator | 2025-06-01 23:47:47 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED
2025-06-01 23:47:47.775883 | orchestrator | 2025-06-01 23:47:47 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED
2025-06-01 23:47:47.776482 | orchestrator | 2025-06-01 23:47:47 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:47:50.817773 | orchestrator | 2025-06-01 23:47:50 | INFO  | Task e8db727c-871f-43f4-9798-3d3b2d8e29b2 is in state SUCCESS
2025-06-01 23:47:50.818508 | orchestrator | 2025-06-01 23:47:50 | INFO  | Task d4a1121d-247b-4ac9-961a-536b315ff3be is in state STARTED
2025-06-01 23:47:50.821371 | orchestrator | 2025-06-01 23:47:50 | INFO  | Task cf7790bc-b9f2-462a-b640-62fc0b8882d4 is in state STARTED
2025-06-01 23:47:50.823772 | orchestrator | 2025-06-01 23:47:50 | INFO  | Task cadbd4c5-6e2a-4b82-814b-82036705b9c4 is in state STARTED
2025-06-01 23:47:50.824682 | orchestrator | 2025-06-01 23:47:50 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED
2025-06-01 23:47:50.826566 | orchestrator | 2025-06-01 23:47:50 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED
2025-06-01 23:47:50.826638 | orchestrator | 2025-06-01 23:47:50 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:47:53.877497 | orchestrator | 2025-06-01 23:47:53 | INFO  | Task d4a1121d-247b-4ac9-961a-536b315ff3be is in state STARTED
2025-06-01 23:47:53.878173 | orchestrator | 2025-06-01 23:47:53 | INFO  | Task cf7790bc-b9f2-462a-b640-62fc0b8882d4 is in state STARTED
2025-06-01 23:47:53.879346 | orchestrator | 2025-06-01 23:47:53 | INFO  | Task cadbd4c5-6e2a-4b82-814b-82036705b9c4 is in state STARTED
2025-06-01 23:47:53.880469 | orchestrator | 2025-06-01 23:47:53 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED
2025-06-01 23:47:53.881739 | orchestrator | 2025-06-01 23:47:53 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED
2025-06-01 23:47:53.881786 | orchestrator | 2025-06-01 23:47:53 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:47:56.924806 | orchestrator | 2025-06-01 23:47:56 | INFO  | Task d4a1121d-247b-4ac9-961a-536b315ff3be is in state SUCCESS
2025-06-01 23:47:56.926465 | orchestrator | 2025-06-01 23:47:56 | INFO  | Task cf7790bc-b9f2-462a-b640-62fc0b8882d4 is in state STARTED
2025-06-01 23:47:56.929072 | orchestrator | 2025-06-01 23:47:56 | INFO  | Task cadbd4c5-6e2a-4b82-814b-82036705b9c4 is in state STARTED
2025-06-01 23:47:56.930301 | orchestrator | 2025-06-01 23:47:56 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED
2025-06-01 23:47:56.931678 | orchestrator | 2025-06-01 23:47:56 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED
2025-06-01 23:47:56.931718 | orchestrator | 2025-06-01 23:47:56 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:47:59.981171 | orchestrator | 2025-06-01 23:47:59 | INFO  | Task cf7790bc-b9f2-462a-b640-62fc0b8882d4 is in state STARTED
2025-06-01 23:47:59.982932 | orchestrator | 2025-06-01 23:47:59 | INFO  | Task cadbd4c5-6e2a-4b82-814b-82036705b9c4 is in state STARTED
2025-06-01 23:47:59.985573 | orchestrator | 2025-06-01 23:47:59 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED
2025-06-01 23:47:59.987380 | orchestrator | 2025-06-01 23:47:59 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED
2025-06-01 23:47:59.987522 | orchestrator | 2025-06-01 23:47:59 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:48:03.034188 | orchestrator | 2025-06-01 23:48:03 | INFO  | Task cf7790bc-b9f2-462a-b640-62fc0b8882d4 is in state STARTED
2025-06-01 23:48:03.036608 | orchestrator | 2025-06-01 23:48:03 | INFO  | Task cadbd4c5-6e2a-4b82-814b-82036705b9c4 is in state STARTED
2025-06-01 23:48:03.038816 | orchestrator | 2025-06-01 23:48:03 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED
2025-06-01 23:48:03.041144 | orchestrator | 2025-06-01 23:48:03 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED
2025-06-01 23:48:03.041185 | orchestrator | 2025-06-01 23:48:03 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:48:06.083327 | orchestrator | 2025-06-01 23:48:06 | INFO  | Task cf7790bc-b9f2-462a-b640-62fc0b8882d4 is in state STARTED
2025-06-01 23:48:06.084246 | orchestrator | 2025-06-01 23:48:06 | INFO  | Task cadbd4c5-6e2a-4b82-814b-82036705b9c4 is in state STARTED
2025-06-01 23:48:06.086955 | orchestrator | 2025-06-01 23:48:06 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED
2025-06-01 23:48:06.089377 | orchestrator | 2025-06-01 23:48:06 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED
2025-06-01 23:48:06.089721 | orchestrator | 2025-06-01 23:48:06 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:48:09.134522 | orchestrator | 2025-06-01 23:48:09 | INFO  | Task cf7790bc-b9f2-462a-b640-62fc0b8882d4 is in state STARTED
2025-06-01 23:48:09.134622 | orchestrator | 2025-06-01 23:48:09 | INFO  | Task cadbd4c5-6e2a-4b82-814b-82036705b9c4 is in state STARTED
2025-06-01 23:48:09.135753 | orchestrator | 2025-06-01 23:48:09 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED
2025-06-01 23:48:09.136945 | orchestrator | 2025-06-01 23:48:09 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED
2025-06-01 23:48:09.136988 | orchestrator | 2025-06-01 23:48:09 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:48:12.191461 | orchestrator | 2025-06-01 23:48:12 | INFO  | Task cf7790bc-b9f2-462a-b640-62fc0b8882d4 is in state STARTED
2025-06-01 23:48:12.192371 | orchestrator | 2025-06-01 23:48:12 | INFO  | Task cadbd4c5-6e2a-4b82-814b-82036705b9c4 is in state STARTED
2025-06-01 23:48:12.193813 | orchestrator | 2025-06-01 23:48:12 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED
2025-06-01 23:48:12.195283 | orchestrator | 2025-06-01 23:48:12 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED
2025-06-01 23:48:12.195413 | orchestrator | 2025-06-01 23:48:12 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:48:15.252384 | orchestrator | 2025-06-01 23:48:15 | INFO  | Task cf7790bc-b9f2-462a-b640-62fc0b8882d4 is in state STARTED
2025-06-01 23:48:15.256357 | orchestrator | 2025-06-01 23:48:15 | INFO  | Task cadbd4c5-6e2a-4b82-814b-82036705b9c4 is in state STARTED
2025-06-01 23:48:15.258938 | orchestrator | 2025-06-01 23:48:15 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED
2025-06-01 23:48:15.260224 | orchestrator | 2025-06-01 23:48:15 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED
2025-06-01 23:48:15.260259 | orchestrator | 2025-06-01 23:48:15 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:48:18.313864 | orchestrator | 2025-06-01 23:48:18 | INFO  | Task cf7790bc-b9f2-462a-b640-62fc0b8882d4 is in state STARTED
2025-06-01 23:48:18.315614 | orchestrator | 2025-06-01 23:48:18 | INFO  | Task cadbd4c5-6e2a-4b82-814b-82036705b9c4 is in state STARTED
2025-06-01 23:48:18.317611 | orchestrator | 2025-06-01 23:48:18 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED
2025-06-01 23:48:18.319364 | orchestrator | 2025-06-01 23:48:18 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED
2025-06-01 23:48:18.319516 | orchestrator | 2025-06-01 23:48:18 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:48:21.369533 | orchestrator | 2025-06-01 23:48:21 | INFO  | Task cf7790bc-b9f2-462a-b640-62fc0b8882d4 is in state STARTED
2025-06-01 23:48:21.370739 | orchestrator | 2025-06-01 23:48:21 | INFO  | Task cadbd4c5-6e2a-4b82-814b-82036705b9c4 is in state STARTED
2025-06-01 23:48:21.372915 | orchestrator | 2025-06-01 23:48:21 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED
2025-06-01 23:48:21.374667 | orchestrator | 2025-06-01 23:48:21 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED
2025-06-01 23:48:21.374831 | orchestrator | 2025-06-01 23:48:21 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:48:24.426151 | orchestrator | 2025-06-01 23:48:24 | INFO  | Task cf7790bc-b9f2-462a-b640-62fc0b8882d4 is in state STARTED
2025-06-01 23:48:24.428607 | orchestrator | 2025-06-01 23:48:24 | INFO  | Task cadbd4c5-6e2a-4b82-814b-82036705b9c4 is in state STARTED
2025-06-01 23:48:24.431538 | orchestrator | 2025-06-01 23:48:24 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED
2025-06-01 23:48:24.431825 | orchestrator | 2025-06-01 23:48:24 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED
2025-06-01 23:48:24.432142 | orchestrator | 2025-06-01 23:48:24 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:48:27.468099 | orchestrator | 2025-06-01 23:48:27 | INFO  | Task cf7790bc-b9f2-462a-b640-62fc0b8882d4 is in state STARTED
2025-06-01 23:48:27.468236 | orchestrator | 2025-06-01 23:48:27 | INFO  | Task cadbd4c5-6e2a-4b82-814b-82036705b9c4 is in state STARTED
2025-06-01 23:48:27.471591 | orchestrator | 2025-06-01 23:48:27 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED
2025-06-01 23:48:27.472339 | orchestrator | 2025-06-01 23:48:27 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED
2025-06-01 23:48:27.472392 | orchestrator | 2025-06-01 23:48:27 | INFO  | Wait 1 second(s) until the next check
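The "Task <uuid> is in state STARTED" lines are not Ansible output: the deploy script hands the remaining playbooks to the OSISM manager as background tasks and polls each task's state every few seconds until it leaves STARTED for SUCCESS (or a failure state). The states match Celery task states; the watcher amounts to something like this sketch (purely illustrative, not OSISM's actual code, and the Celery app wiring is omitted):

    import time
    from celery.result import AsyncResult

    def wait_for(task_id: str) -> str:
        # Poll the broker-backed result until the worker reports a terminal state.
        result = AsyncResult(task_id)
        while result.state == "STARTED":
            print(f"Task {task_id} is in state {result.state}")
            time.sleep(1)
        return result.state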
2025-06-01 23:48:30.519464 | orchestrator |
2025-06-01 23:48:30.519589 | orchestrator |
2025-06-01 23:48:30.519602 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2025-06-01 23:48:30.519611 | orchestrator |
2025-06-01 23:48:30.519619 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-06-01 23:48:30.519627 | orchestrator | Sunday 01 June 2025 23:47:46 +0000 (0:00:00.173) 0:00:00.173 ***********
2025-06-01 23:48:30.519635 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-06-01 23:48:30.519643 | orchestrator |
2025-06-01 23:48:30.519651 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-06-01 23:48:30.519659 | orchestrator | Sunday 01 June 2025 23:47:47 +0000 (0:00:00.785) 0:00:00.959 ***********
2025-06-01 23:48:30.519666 | orchestrator | changed: [testbed-manager]
2025-06-01 23:48:30.519674 | orchestrator |
2025-06-01 23:48:30.519682 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2025-06-01 23:48:30.519689 | orchestrator | Sunday 01 June 2025 23:47:48 +0000 (0:00:01.092) 0:00:02.051 ***********
2025-06-01 23:48:30.519696 | orchestrator | changed: [testbed-manager]
2025-06-01 23:48:30.519704 | orchestrator |
2025-06-01 23:48:30.519711 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 23:48:30.519719 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 23:48:30.519728 | orchestrator |
2025-06-01 23:48:30.519736 | orchestrator |
2025-06-01 23:48:30.519743 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 23:48:30.519750 | orchestrator | Sunday 01 June 2025 23:47:48 +0000 (0:00:00.421) 0:00:02.473 ***********
2025-06-01 23:48:30.519757 | orchestrator | ===============================================================================
2025-06-01 23:48:30.519765 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.09s
2025-06-01 23:48:30.519772 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.79s
2025-06-01 23:48:30.519779 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.42s
2025-06-01 23:48:30.519786 | orchestrator |
2025-06-01 23:48:30.519793 | orchestrator |
2025-06-01 23:48:30.519801 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-06-01 23:48:30.519808 | orchestrator |
2025-06-01 23:48:30.519815 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-06-01 23:48:30.519846 | orchestrator | Sunday 01 June 2025 23:47:47 +0000 (0:00:00.167) 0:00:00.167 ***********
2025-06-01 23:48:30.519854 | orchestrator | ok: [testbed-manager]
2025-06-01 23:48:30.519862 | orchestrator |
2025-06-01 23:48:30.519869 | orchestrator | TASK [Create .kube directory] **************************************************
2025-06-01 23:48:30.519877 | orchestrator | Sunday 01 June 2025 23:47:47 +0000 (0:00:00.713) 0:00:00.881 ***********
2025-06-01 23:48:30.519884 | orchestrator | ok: [testbed-manager]
2025-06-01 23:48:30.519891 | orchestrator |
2025-06-01 23:48:30.519898 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-06-01 23:48:30.519906 | orchestrator | Sunday 01 June 2025 23:47:48 +0000 (0:00:00.497) 0:00:01.378 ***********
2025-06-01 23:48:30.519913 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-06-01 23:48:30.519920 | orchestrator |
2025-06-01 23:48:30.519927 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-06-01 23:48:30.519934 | orchestrator | Sunday 01 June 2025 23:47:48 +0000 (0:00:00.648) 0:00:02.026 ***********
2025-06-01 23:48:30.519941 | orchestrator | changed: [testbed-manager]
2025-06-01 23:48:30.519948 | orchestrator |
2025-06-01 23:48:30.519956 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-06-01 23:48:30.519963 | orchestrator | Sunday 01 June 2025 23:47:49 +0000 (0:00:01.023) 0:00:03.050 ***********
2025-06-01 23:48:30.519970 | orchestrator | changed: [testbed-manager]
2025-06-01 23:48:30.519977 | orchestrator |
2025-06-01 23:48:30.519984 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-06-01 23:48:30.520059 | orchestrator | Sunday 01 June 2025 23:47:50 +0000 (0:00:00.740) 0:00:03.791 ***********
2025-06-01 23:48:30.520073 | orchestrator | changed: [testbed-manager -> localhost]
2025-06-01 23:48:30.520084 | orchestrator |
2025-06-01 23:48:30.520095 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-06-01 23:48:30.520106 | orchestrator | Sunday 01 June 2025 23:47:53 +0000 (0:00:02.575) 0:00:06.367 ***********
2025-06-01 23:48:30.520118 | orchestrator | changed: [testbed-manager -> localhost]
2025-06-01 23:48:30.520130 | orchestrator |
2025-06-01 23:48:30.520141 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-06-01 23:48:30.520153 | orchestrator | Sunday 01 June 2025 23:47:54 +0000 (0:00:00.910) 0:00:07.277 ***********
2025-06-01 23:48:30.520164 | orchestrator | ok: [testbed-manager]
2025-06-01 23:48:30.520176 | orchestrator |
2025-06-01 23:48:30.520188 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-06-01 23:48:30.520200 | orchestrator | Sunday 01 June 2025 23:47:54 +0000 (0:00:00.438) 0:00:07.716 ***********
2025-06-01 23:48:30.520213 | orchestrator | ok: [testbed-manager]
2025-06-01 23:48:30.520225 | orchestrator |
2025-06-01 23:48:30.520257 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 23:48:30.520266 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 23:48:30.520274 | orchestrator |
2025-06-01 23:48:30.520281 | orchestrator |
2025-06-01 23:48:30.520289 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 23:48:30.520296 | orchestrator | Sunday 01 June 2025 23:47:54 +0000 (0:00:00.362) 0:00:08.078 ***********
2025-06-01 23:48:30.520303 | orchestrator | ===============================================================================
2025-06-01 23:48:30.520310 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 2.58s
2025-06-01 23:48:30.520317 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.02s
2025-06-01 23:48:30.520325 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.91s
2025-06-01 23:48:30.520351 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.74s
2025-06-01 23:48:30.520359 | orchestrator | Get home directory of operator user ------------------------------------- 0.71s
2025-06-01 23:48:30.520366 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.65s
2025-06-01 23:48:30.520383 | orchestrator | Create .kube directory -------------------------------------------------- 0.50s
2025-06-01 23:48:30.520390 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.44s
2025-06-01 23:48:30.520397 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.36s
2025-06-01 23:48:30.520405 | orchestrator |
2025-06-01 23:48:30.520412 | orchestrator |
2025-06-01 23:48:30.520419 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2025-06-01 23:48:30.520426 | orchestrator |
2025-06-01 23:48:30.520433 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-06-01 23:48:30.520441 | orchestrator | Sunday 01 June 2025 23:46:12 +0000 (0:00:00.252) 0:00:00.252 ***********
2025-06-01 23:48:30.520448 | orchestrator | ok: [localhost] => {
2025-06-01 23:48:30.520456 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2025-06-01 23:48:30.520464 | orchestrator | }
2025-06-01 23:48:30.520472 | orchestrator |
2025-06-01 23:48:30.520479 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-06-01 23:48:30.520486 | orchestrator | Sunday 01 June 2025 23:46:12 +0000 (0:00:00.128) 0:00:00.381 ***********
2025-06-01 23:48:30.520495 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-06-01 23:48:30.520504 | orchestrator | ...ignoring
2025-06-01 23:48:30.520512 | orchestrator |
2025-06-01 23:48:30.520519 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-06-01 23:48:30.520526 | orchestrator | Sunday 01 June 2025 23:46:16 +0000 (0:00:03.886) 0:00:04.267 ***********
2025-06-01 23:48:30.520533 | orchestrator | skipping: [localhost]
2025-06-01 23:48:30.520541 | orchestrator |
2025-06-01 23:48:30.520548 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-06-01 23:48:30.520555 | orchestrator | Sunday 01 June 2025 23:46:16 +0000 (0:00:00.139) 0:00:04.406 ***********
2025-06-01 23:48:30.520563 | orchestrator | ok: [localhost]
2025-06-01 23:48:30.520570 | orchestrator |
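The expected failure above is the pattern announced by the preceding message: probe the RabbitMQ management endpoint on the internal VIP and, if it answers, switch kolla_action_rabbitmq to "upgrade"; on a fresh testbed the probe times out and is ignored. The error output is the ansible.builtin.wait_for timeout message, so the check is essentially (timeout value illustrative):

    - name: Check RabbitMQ service
      ansible.builtin.wait_for:
        host: 192.168.16.9
        port: 15672
        search_regex: RabbitMQ Management
        timeout: 2
      register: rabbitmq_service
      ignore_errors: true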
2025-06-01 23:48:30.520787 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2025-06-01 23:48:30.520795 | orchestrator | Sunday 01 June 2025 23:46:23 +0000 (0:00:00.551) 0:00:10.720 ***********
2025-06-01 23:48:30.520802 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:48:30.520809 | orchestrator |
2025-06-01 23:48:30.520816 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2025-06-01 23:48:30.520829 | orchestrator | Sunday 01 June 2025 23:46:23 +0000 (0:00:00.341) 0:00:11.061 ***********
2025-06-01 23:48:30.520836 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:48:30.520843 | orchestrator |
2025-06-01 23:48:30.520851 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2025-06-01 23:48:30.520858 | orchestrator | Sunday 01 June 2025 23:46:23 +0000 (0:00:00.392) 0:00:11.454 ***********
2025-06-01 23:48:30.520865 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:48:30.520872 | orchestrator |
2025-06-01 23:48:30.520879 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-06-01 23:48:30.520887 | orchestrator | Sunday 01 June 2025 23:46:24 +0000 (0:00:00.477) 0:00:11.931 ***********
2025-06-01 23:48:30.520894 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 23:48:30.520901 | orchestrator |
2025-06-01 23:48:30.520908 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-06-01 23:48:30.520921 | orchestrator | Sunday 01 June 2025 23:46:25 +0000 (0:00:00.783) 0:00:12.715 ***********
2025-06-01 23:48:30.520929 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:48:30.520936 | orchestrator |
2025-06-01 23:48:30.520943 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2025-06-01 23:48:30.520950 | orchestrator | Sunday 01 June 2025 23:46:25 +0000 (0:00:00.931) 0:00:13.646 ***********
2025-06-01 23:48:30.520958 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:48:30.520965 | orchestrator |
2025-06-01 23:48:30.520972 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2025-06-01 23:48:30.520979 | orchestrator | Sunday 01 June 2025 23:46:26 +0000 (0:00:00.437) 0:00:14.084 ***********
2025-06-01 23:48:30.520986 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:48:30.521012 | orchestrator |
2025-06-01 23:48:30.521021 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2025-06-01 23:48:30.521028 | orchestrator | Sunday 01 June 2025 23:46:26 +0000 (0:00:00.387) 0:00:14.471 ***********
2025-06-01 23:48:30.521040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-06-01 23:48:30.521053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {…same service definition as for testbed-node-0…}})
2025-06-01 23:48:30.521114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {…same service definition…}})
2025-06-01 23:48:30.521124 | orchestrator |
2025-06-01 23:48:30.521131 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2025-06-01 23:48:30.521139 | orchestrator | Sunday 01 June 2025 23:46:27 +0000 (0:00:00.907) 0:00:15.378 ***********
2025-06-01 23:48:30.521154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {…same service definition…}})
2025-06-01 23:48:30.521163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {…same service definition…}})
2025-06-01 23:48:30.521181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {…same service definition…}})
2025-06-01 23:48:30.521194 | orchestrator |
2025-06-01 23:48:30.521205 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2025-06-01 23:48:30.521217 | orchestrator | Sunday 01 June 2025 23:46:29 +0000 (0:00:01.949) 0:00:17.328 ***********
2025-06-01 23:48:30.521230 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-06-01 23:48:30.521244 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-06-01 23:48:30.521261 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-06-01 23:48:30.521269 | orchestrator |
2025-06-01 23:48:30.521276 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2025-06-01 23:48:30.521284 | orchestrator | Sunday 01 June 2025 23:46:31 +0000 (0:00:01.716) 0:00:19.045 ***********
2025-06-01 23:48:30.521291 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
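Every configuration task in this role loops over the same service map, which is why each changed line above echoes the full item={'key': …, 'value': …} structure. A rough sketch of that loop shape, not the exact kolla-ansible source (the /etc/kolla path stands in for the configurable node config directory):

    - name: Ensuring config directories exist
      ansible.builtin.file:
        path: "/etc/kolla/{{ item.key }}"    # assumed node_config_directory layout
        state: directory
        mode: "0770"
      with_dict: "{{ rabbitmq_services }}"   # the 'rabbitmq' service definition seen above
      when: item.value.enabled | bool

The rendered templates land in /etc/kolla/rabbitmq/ on each node; per the volumes list in the item above, the container mounts that directory read-only at /var/lib/kolla/config_files/ and, because KOLLA_CONFIG_STRATEGY is COPY_ALWAYS, copies the files into place on every container start.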
2025-06-01 23:48:30.521298 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-06-01 23:48:30.521305 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-06-01 23:48:30.521312 | orchestrator |
2025-06-01 23:48:30.521320 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2025-06-01 23:48:30.521331 | orchestrator | Sunday 01 June 2025 23:46:35 +0000 (0:00:04.012) 0:00:23.058 ***********
2025-06-01 23:48:30.521339 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-06-01 23:48:30.521346 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-06-01 23:48:30.521354 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-06-01 23:48:30.521362 | orchestrator |
2025-06-01 23:48:30.521371 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2025-06-01 23:48:30.521379 | orchestrator | Sunday 01 June 2025 23:46:36 +0000 (0:00:01.510) 0:00:24.568 ***********
2025-06-01 23:48:30.521388 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-06-01 23:48:30.521396 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-06-01 23:48:30.521405 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-06-01 23:48:30.521414 | orchestrator |
2025-06-01 23:48:30.521422 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2025-06-01 23:48:30.521437 | orchestrator | Sunday 01 June 2025 23:46:38 +0000 (0:00:02.116) 0:00:26.685 ***********
2025-06-01 23:48:30.521446 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-06-01 23:48:30.521455 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-06-01 23:48:30.521464 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-06-01 23:48:30.521472 | orchestrator |
2025-06-01 23:48:30.521481 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2025-06-01 23:48:30.521490 | orchestrator | Sunday 01 June 2025 23:46:40 +0000 (0:00:01.448) 0:00:28.133 ***********
2025-06-01 23:48:30.521498 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-06-01 23:48:30.521507 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-06-01 23:48:30.521516 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-06-01 23:48:30.521525 | orchestrator |
2025-06-01 23:48:30.521534 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-06-01 23:48:30.521543 | orchestrator | Sunday 01 June 2025 23:46:41 +0000 (0:00:01.510) 0:00:29.643 ***********
2025-06-01 23:48:30.521551 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:48:30.521560 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:48:30.521569 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:48:30.521578 | orchestrator |
2025-06-01 23:48:30.521586 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2025-06-01 23:48:30.521595 | orchestrator | Sunday 01 June 2025 23:46:42 +0000 (0:00:00.504) 0:00:30.148 ***********
2025-06-01 23:48:30.521605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {…same service definition as above…}})
2025-06-01 23:48:30.521625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {…same service definition…}})
2025-06-01 23:48:30.521636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {…same service definition…}})
2025-06-01 23:48:30.521652 | orchestrator |
2025-06-01 23:48:30.521661 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2025-06-01 23:48:30.521669 | orchestrator | Sunday 01 June 2025 23:46:44 +0000 (0:00:02.036) 0:00:32.184 ***********
2025-06-01 23:48:30.521678 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:48:30.521687 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:48:30.521696 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:48:30.521704 | orchestrator |
2025-06-01 23:48:30.521713 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2025-06-01 23:48:30.521722 | orchestrator | Sunday 01 June 2025 23:46:45 +0000 (0:00:00.933) 0:00:33.118 ***********
2025-06-01 23:48:30.521731 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:48:30.521739 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:48:30.521748 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:48:30.521757 | orchestrator |
2025-06-01 23:48:30.521765 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2025-06-01 23:48:30.521774 | orchestrator | Sunday 01 June 2025 23:46:52 +0000 (0:00:07.093) 0:00:40.212 ***********
2025-06-01 23:48:30.521783 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:48:30.521792 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:48:30.521800 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:48:30.521809 | orchestrator |
2025-06-01 23:48:30.521818 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-06-01 23:48:30.521827 | orchestrator |
2025-06-01 23:48:30.521835 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-06-01 23:48:30.521844 | orchestrator | Sunday 01 June 2025 23:46:53 +0000 (0:00:00.541) 0:00:40.753 ***********
2025-06-01 23:48:30.521853 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:48:30.521861 | orchestrator |
2025-06-01 23:48:30.521870 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-06-01 23:48:30.521879 | orchestrator | Sunday 01 June 2025 23:46:53 +0000 (0:00:00.730) 0:00:41.484 ***********
2025-06-01 23:48:30.521887 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:48:30.521896 | orchestrator |
2025-06-01 23:48:30.521905 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-06-01 23:48:30.521914 | orchestrator | Sunday 01 June 2025 23:46:54 +0000 (0:00:00.289) 0:00:41.773 ***********
2025-06-01 23:48:30.521922 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:48:30.521931 | orchestrator |
2025-06-01 23:48:30.521940 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-06-01 23:48:30.521949 | orchestrator | Sunday 01 June 2025 23:46:55 +0000 (0:00:01.663) 0:00:43.437 ***********
2025-06-01 23:48:30.521957 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:48:30.521966 | orchestrator |
2025-06-01 23:48:30.521975 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-06-01 23:48:30.521983 | orchestrator |
2025-06-01 23:48:30.521992 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-06-01 23:48:30.522076 | orchestrator | Sunday 01 June 2025 23:47:50 +0000 (0:00:54.785) 0:01:38.222 ***********
2025-06-01 23:48:30.522091 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:48:30.522100 | orchestrator |
2025-06-01 23:48:30.522109 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-06-01 23:48:30.522118 | orchestrator | Sunday 01 June 2025 23:47:51 +0000 (0:00:00.531) 0:01:38.754 ***********
2025-06-01 23:48:30.522127 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:48:30.522135 | orchestrator |
2025-06-01 23:48:30.522149 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-06-01 23:48:30.522158 | orchestrator | Sunday 01 June 2025 23:47:51 +0000 (0:00:00.444) 0:01:39.198 ***********
2025-06-01 23:48:30.522224 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:48:30.522239 | orchestrator |
2025-06-01 23:48:30.522254 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-06-01 23:48:30.522269 | orchestrator | Sunday 01 June 2025 23:47:53 +0000 (0:00:01.996) 0:01:41.195 ***********
2025-06-01 23:48:30.522284 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:48:30.522300 | orchestrator |
2025-06-01 23:48:30.522315 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-06-01 23:48:30.522330 | orchestrator |
2025-06-01 23:48:30.522346 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-06-01 23:48:30.522361 | orchestrator | Sunday 01 June 2025 23:48:06 +0000 (0:00:13.386) 0:01:54.581 ***********
2025-06-01 23:48:30.522373 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:48:30.522382 | orchestrator |
2025-06-01 23:48:30.522399 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-06-01 23:48:30.522408 | orchestrator | Sunday 01 June 2025 23:48:07 +0000 (0:00:00.580) 0:01:55.161 ***********
2025-06-01 23:48:30.522417 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:48:30.522426 | orchestrator |
2025-06-01 23:48:30.522434 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-06-01 23:48:30.522443 | orchestrator | Sunday 01 June 2025 23:48:07 +0000 (0:00:00.221) 0:01:55.383 ***********
2025-06-01 23:48:30.522452 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:48:30.522460 | orchestrator |
2025-06-01 23:48:30.522469 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-06-01 23:48:30.522478 | orchestrator | Sunday 01 June 2025 23:48:14 +0000 (0:00:06.848) 0:02:02.232 ***********
2025-06-01 23:48:30.522487 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:48:30.522495 | orchestrator |
2025-06-01 23:48:30.522504 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2025-06-01 23:48:30.522513 | orchestrator |
2025-06-01 23:48:30.522522 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2025-06-01 23:48:30.522530 | orchestrator | Sunday 01 June 2025 23:48:25 +0000 (0:00:11.194) 0:02:13.426 ***********
2025-06-01 23:48:30.522539 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 23:48:30.522547 | orchestrator |
2025-06-01 23:48:30.522556 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2025-06-01 23:48:30.522565 | orchestrator | Sunday 01 June 2025 23:48:26 +0000 (0:00:00.684) 0:02:14.110 ***********
2025-06-01 23:48:30.522574 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_outward_rabbitmq_True
2025-06-01 23:48:30.522591 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: outward_rabbitmq_restart
2025-06-01 23:48:30.522608 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:48:30.522617 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:48:30.522626 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:48:30.522634 | orchestrator |
2025-06-01 23:48:30.522643 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2025-06-01 23:48:30.522652 | orchestrator | skipping: no hosts matched
2025-06-01 23:48:30.522660 | orchestrator |
2025-06-01 23:48:30.522669 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2025-06-01 23:48:30.522678 | orchestrator | skipping: no hosts matched
2025-06-01 23:48:30.522701 | orchestrator |
2025-06-01 23:48:30.522710 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2025-06-01 23:48:30.522719 | orchestrator | skipping: no hosts matched
2025-06-01 23:48:30.522728 | orchestrator |
2025-06-01 23:48:30.522736 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 23:48:30.522745 | orchestrator | localhost      : ok=3    changed=0    unreachable=0    failed=0    skipped=1    rescued=0    ignored=1
2025-06-01 23:48:30.522755 | orchestrator | testbed-node-0 : ok=23   changed=14   unreachable=0    failed=0    skipped=8    rescued=0    ignored=0
2025-06-01 23:48:30.522764 | orchestrator | testbed-node-1 : ok=21   changed=14   unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
2025-06-01 23:48:30.522772 | orchestrator | testbed-node-2 : ok=21   changed=14   unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
2025-06-01 23:48:30.522781 | orchestrator |
2025-06-01 23:48:30.522790 | orchestrator |
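Note how the three "Restart rabbitmq services" plays above bounce testbed-node-0, -1 and -2 strictly one after another, each time waiting until the broker answers again (roughly 54.8 s, 13.4 s and 11.2 s, which together account for the 79.37 s "Waiting for rabbitmq to start" total in the recap below) before touching the next node. A sketch of that rolling-restart shape, assuming the same management-UI wait as in the earlier check and a plain docker restart standing in for the kolla container module:

    - name: Restart rabbitmq services
      hosts: rabbitmq
      serial: 1                              # one broker down at a time keeps the cluster quorate
      tasks:
        - name: Restart rabbitmq container
          ansible.builtin.command: docker restart rabbitmq   # stand-in for the kolla container restart
          changed_when: true

        - name: Waiting for rabbitmq to start
          ansible.builtin.wait_for:
            host: "{{ api_interface_address | default(ansible_host) }}"
            port: 15672
            search_regex: RabbitMQ Management
            timeout: 300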
2025-06-01 23:48:30.522799 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 23:48:30.522807 | orchestrator | Sunday 01 June 2025 23:48:28 +0000 (0:00:02.148) 0:02:16.259 ***********
2025-06-01 23:48:30.522816 | orchestrator | ===============================================================================
2025-06-01 23:48:30.522825 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 79.37s
2025-06-01 23:48:30.522833 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.51s
2025-06-01 23:48:30.522842 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.09s
2025-06-01 23:48:30.522851 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 4.01s
2025-06-01 23:48:30.522859 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.89s
2025-06-01 23:48:30.522868 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.15s
2025-06-01 23:48:30.522877 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.12s
2025-06-01 23:48:30.522885 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.04s
2025-06-01 23:48:30.522894 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.99s
2025-06-01 23:48:30.522903 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.95s
2025-06-01 23:48:30.522912 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.84s
2025-06-01 23:48:30.522921 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.72s
2025-06-01 23:48:30.522930 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.51s
2025-06-01 23:48:30.522938 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.51s
2025-06-01 23:48:30.522947 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.45s
2025-06-01 23:48:30.522956 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.40s
2025-06-01 23:48:30.522965 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.12s
2025-06-01 23:48:30.522978 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.96s
2025-06-01 23:48:30.522987 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.93s
2025-06-01 23:48:30.523047 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.93s
2025-06-01 23:48:30.523059 | orchestrator | 2025-06-01 23:48:30 | INFO  | Task cf7790bc-b9f2-462a-b640-62fc0b8882d4 is in state SUCCESS
2025-06-01 23:48:30.523068 | orchestrator | 2025-06-01 23:48:30 | INFO  | Task cadbd4c5-6e2a-4b82-814b-82036705b9c4 is in state STARTED
2025-06-01 23:48:30.523349 | orchestrator | 2025-06-01 23:48:30 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED
2025-06-01 23:48:30.525398 | orchestrator | 2025-06-01 23:48:30 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED
2025-06-01 23:48:30.525452 | orchestrator | 2025-06-01 23:48:30 | INFO  | Wait 1 second(s) until the next check
[… the status check above repeats every ~3 seconds from 23:48:33 to 23:49:43; all three tasks remain in state STARTED throughout …]
2025-06-01 23:49:46.672701 | orchestrator | 2025-06-01 23:49:46 | INFO  | Task cadbd4c5-6e2a-4b82-814b-82036705b9c4 is in state SUCCESS
2025-06-01 23:49:46.673623 | orchestrator |
2025-06-01 23:49:46.673668 | orchestrator |
2025-06-01 23:49:46.673680 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-01 23:49:46.673693 | orchestrator |
2025-06-01 23:49:46.673704 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-01 23:49:46.673716 | orchestrator | Sunday 01 June 2025 23:47:09 +0000 (0:00:00.219) 0:00:00.219 ***********
2025-06-01 23:49:46.673728 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:49:46.673740 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:49:46.673751 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:49:46.673763 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:49:46.673774 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:49:46.673785 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:49:46.673796 | orchestrator |
2025-06-01 23:49:46.673808 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-01 23:49:46.673819 | orchestrator | Sunday 01 June 2025 23:47:10 +0000 (0:00:00.723) 0:00:00.943 ***********
2025-06-01 23:49:46.673891 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2025-06-01 23:49:46.673904 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2025-06-01 23:49:46.673915 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2025-06-01 23:49:46.673927 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2025-06-01 23:49:46.673938 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2025-06-01 23:49:46.673949 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2025-06-01 23:49:46.673961 | orchestrator |
2025-06-01 23:49:46.673980 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2025-06-01 23:49:46.674103 | orchestrator |
2025-06-01 23:49:46.674117 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2025-06-01 23:49:46.674175 | orchestrator | Sunday 01 June 2025 23:47:11 +0000 (0:00:01.034) 0:00:01.977 ***********
2025-06-01 23:49:46.674190 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 23:49:46.674229 | orchestrator |
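The throwaway inventory groups above (enable_rabbitmq_True earlier, enable_ovn_True here) are built with group_by so that later plays can target hosts by feature flag; the "[WARNING]: Could not match supplied host pattern" lines in the rabbitmq run are the harmless flip side, plays aimed at a flag group that no host ever joined. A minimal sketch of the pattern (the enable_ovn variable name mirrors the group names in the log):

    - name: Group hosts based on enabled services
      ansible.builtin.group_by:
        key: "{{ item }}"
      loop:
        - "enable_ovn_{{ enable_ovn | bool }}"   # evaluates to enable_ovn_True here
      changed_when: false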
2025-06-01 23:49:46.674241 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2025-06-01 23:49:46.674252 | orchestrator | Sunday 01 June 2025 23:47:12 +0000 (0:00:01.304) 0:00:03.281 ***********
2025-06-01 23:49:46.674266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-01 23:49:46.674280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {…same service definition as for testbed-node-1…}})
2025-06-01 23:49:46.674292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {…same service definition…}})
2025-06-01 23:49:46.674303 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {…same service definition…}})
2025-06-01 23:49:46.674314 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {…same service definition…}})
2025-06-01 23:49:46.674341 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {…same service definition…}})
2025-06-01 23:49:46.674352 | orchestrator |
2025-06-01 23:49:46.674379 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2025-06-01 23:49:46.674391 | orchestrator | Sunday 01 June 2025 23:47:14 +0000 (0:00:01.833) 0:00:05.115 ***********
2025-06-01 23:49:46.674403 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {…same service definition…}})
2025-06-01 23:49:46.674414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {…same service definition…}})
2025-06-01 23:49:46.674434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {…same service definition…}})
2025-06-01 23:49:46.674445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {…same service definition…}})
2025-06-01 23:49:46.674456 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {…same service definition…}})
2025-06-01 23:49:46.674467 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {…same service definition…}})
2025-06-01 23:49:46.674479 | orchestrator |
2025-06-01 23:49:46.674490 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2025-06-01 23:49:46.674500 | orchestrator | Sunday 01 June 2025 23:47:16 +0000 (0:00:01.886) 0:00:07.002 ***********
2025-06-01 23:49:46.674512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {…same service definition…}})
2025-06-01 23:49:46.674523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {…same service definition…}})
2025-06-01 23:49:46.674546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {…same service definition…}})
2025-06-01 23:49:46.674559 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {…same service definition…}})
2025-06-01 23:49:46.674570 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {…same service definition…}})
2025-06-01 23:49:46.674588 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {…same service definition…}})
2025-06-01 23:49:46.674599 | orchestrator |
2025-06-01 23:49:46.674610 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2025-06-01 23:49:46.674621 | orchestrator | Sunday 01 June 2025 23:47:18 +0000 (0:00:01.880) 0:00:08.882 ***********
2025-06-01 23:49:46.674632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {…same service definition…}})
2025-06-01 23:49:46.674643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {…same service definition…}})
2025-06-01 23:49:46.674654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {…same service definition…}})
2025-06-01 23:49:46.674666 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {…same service definition…}})
2025-06-01 23:49:46.674677 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {…same service definition…}})
2025-06-01 23:49:46.674693 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {…same service definition…}})
2025-06-01 23:49:46.674704 | orchestrator |
2025-06-01 23:49:46.674721 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2025-06-01 23:49:46.674733 | orchestrator | Sunday 01 June 2025 23:47:21 +0000 (0:00:02.552) 0:00:11.435 ***********
2025-06-01 23:49:46.674744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {…same service definition…}})
2025-06-01 23:49:46.674762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {…same service definition…}})
2025-06-01 23:49:46.674774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {…same service definition…}})
2025-06-01 23:49:46.674785 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {…same service definition…}})
2025-06-01 23:49:46.674796 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {…same service definition…}})
2025-06-01 23:49:46.674807 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {…same service definition…}})
2025-06-01 23:49:46.674818 | orchestrator |
2025-06-01 23:49:46.674829 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2025-06-01 23:49:46.674840 | orchestrator | Sunday 01 June 2025 23:47:23 +0000 (0:00:01.942) 0:00:13.377 ***********
2025-06-01 23:49:46.674851 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:49:46.674862 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:49:46.674873 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:49:46.674883 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:49:46.674894 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:49:46.674905 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:49:46.674916 | orchestrator |
2025-06-01 23:49:46.674927 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2025-06-01 23:49:46.674937 | orchestrator | Sunday 01 June 2025 23:47:25 +0000 (0:00:02.432) 0:00:15.809 ***********
2025-06-01 23:49:46.674948 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2025-06-01 23:49:46.674960 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2025-06-01 23:49:46.674970 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2025-06-01 23:49:46.674981 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2025-06-01 23:49:46.675021 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-01 23:49:46.675051 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2025-06-01 23:49:46.675075 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2025-06-01 23:49:46.675094 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-01 23:49:46.675113 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-01 23:49:46.675124 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-01 23:49:46.675135 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-01 23:49:46.675146 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-01 23:49:46.675158 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-01 23:49:46.675170 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-01 23:49:46.675181 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-01 23:49:46.675192 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-01 23:49:46.675203 |
orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-01 23:49:46.675214 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-01 23:49:46.675225 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-01 23:49:46.675237 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-01 23:49:46.675248 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-01 23:49:46.675258 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-01 23:49:46.675269 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-01 23:49:46.675280 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-01 23:49:46.675290 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-01 23:49:46.675301 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-01 23:49:46.675312 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-01 23:49:46.675322 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-01 23:49:46.675333 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-01 23:49:46.675344 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-01 23:49:46.675354 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-01 23:49:46.675365 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-01 23:49:46.675376 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-01 23:49:46.675387 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-01 23:49:46.675398 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-01 23:49:46.675416 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-01 23:49:46.675427 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-06-01 23:49:46.675438 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-06-01 23:49:46.675449 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-06-01 23:49:46.675460 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-06-01 23:49:46.675470 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-06-01 23:49:46.675481 | orchestrator | ok: 
[testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-06-01 23:49:46.675496 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-06-01 23:49:46.675508 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-06-01 23:49:46.675524 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-06-01 23:49:46.675536 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-06-01 23:49:46.675547 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-06-01 23:49:46.675564 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-06-01 23:49:46.675581 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-06-01 23:49:46.675599 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-06-01 23:49:46.675617 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-06-01 23:49:46.675634 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-06-01 23:49:46.675646 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-06-01 23:49:46.675657 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-06-01 23:49:46.675667 | orchestrator | 2025-06-01 23:49:46.675678 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-01 23:49:46.675689 | orchestrator | Sunday 01 June 2025 23:47:45 +0000 (0:00:20.374) 0:00:36.184 *********** 2025-06-01 23:49:46.675700 | orchestrator | 2025-06-01 23:49:46.675711 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-01 23:49:46.675722 | orchestrator | Sunday 01 June 2025 23:47:45 +0000 (0:00:00.089) 0:00:36.274 *********** 2025-06-01 23:49:46.675732 | orchestrator | 2025-06-01 23:49:46.675743 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-01 23:49:46.675754 | orchestrator | Sunday 01 June 2025 23:47:45 +0000 (0:00:00.086) 0:00:36.361 *********** 2025-06-01 23:49:46.675764 | orchestrator | 2025-06-01 23:49:46.675775 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-01 23:49:46.675786 | orchestrator | Sunday 01 June 2025 23:47:46 +0000 (0:00:00.078) 0:00:36.439 *********** 2025-06-01 23:49:46.675797 | orchestrator | 2025-06-01 23:49:46.675815 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-01 23:49:46.675826 | orchestrator | Sunday 01 June 2025 23:47:46 +0000 (0:00:00.069) 0:00:36.509 *********** 2025-06-01 
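
The "Configure OVN in OVSDB" task above registers each node as an OVN chassis by writing external-ids into the local Open vSwitch database: the Geneve tunnel endpoint (ovn-encap-ip, unique per node), the southbound connection list (ovn-remote, one tcp endpoint per ovn-sb-db host on port 6642), the probe intervals, and the per-role settings. Note that ovn-bridge-mappings and ovn-cms-options (enable-chassis-as-gw) are set to present on the three control nodes and absent on the compute nodes, while ovn-chassis-mac-mappings is set only on the compute nodes. A minimal sketch of the equivalent configuration for testbed-node-0, assuming a plain ovs-vsctl binary on the host (the loop below is illustrative, not the role's actual implementation):

    import subprocess

    # Chassis options as they appear in the task output for testbed-node-0.
    external_ids = {
        "ovn-encap-ip": "192.168.16.10",
        "ovn-encap-type": "geneve",
        "ovn-remote": "tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642",
        "ovn-remote-probe-interval": "60000",   # milliseconds
        "ovn-openflow-probe-interval": "60",    # seconds
        "ovn-monitor-all": "false",
        "ovn-bridge-mappings": "physnet1:br-ex",
        "ovn-cms-options": "enable-chassis-as-gw,availability-zones=nova",
    }

    for key, value in external_ids.items():
        # ovn-controller reads these keys from the Open_vSwitch table.
        subprocess.run(
            ["ovs-vsctl", "set", "Open_vSwitch", ".", f"external-ids:{key}={value}"],
            check=True,
        )

Items reported with state absent correspond to removing the key instead, e.g. ovs-vsctl remove Open_vSwitch . external-ids ovn-cms-options.
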
23:49:46.675841 | orchestrator | 2025-06-01 23:49:46.675857 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-01 23:49:46.675869 | orchestrator | Sunday 01 June 2025 23:47:46 +0000 (0:00:00.088) 0:00:36.598 *********** 2025-06-01 23:49:46.675879 | orchestrator | 2025-06-01 23:49:46.675890 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-06-01 23:49:46.675901 | orchestrator | Sunday 01 June 2025 23:47:46 +0000 (0:00:00.155) 0:00:36.753 *********** 2025-06-01 23:49:46.675912 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:49:46.675923 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:49:46.675934 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:49:46.675944 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:49:46.675955 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:49:46.675966 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:49:46.675977 | orchestrator | 2025-06-01 23:49:46.676013 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-06-01 23:49:46.676027 | orchestrator | Sunday 01 June 2025 23:47:48 +0000 (0:00:02.364) 0:00:39.118 *********** 2025-06-01 23:49:46.676037 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:49:46.676048 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:49:46.676059 | orchestrator | changed: [testbed-node-5] 2025-06-01 23:49:46.676070 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:49:46.676080 | orchestrator | changed: [testbed-node-3] 2025-06-01 23:49:46.676091 | orchestrator | changed: [testbed-node-4] 2025-06-01 23:49:46.676102 | orchestrator | 2025-06-01 23:49:46.676113 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-06-01 23:49:46.676124 | orchestrator | 2025-06-01 23:49:46.676135 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-06-01 23:49:46.676146 | orchestrator | Sunday 01 June 2025 23:48:26 +0000 (0:00:38.150) 0:01:17.268 *********** 2025-06-01 23:49:46.676157 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:49:46.676168 | orchestrator | 2025-06-01 23:49:46.676179 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-06-01 23:49:46.676190 | orchestrator | Sunday 01 June 2025 23:48:27 +0000 (0:00:00.510) 0:01:17.778 *********** 2025-06-01 23:49:46.676201 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:49:46.676212 | orchestrator | 2025-06-01 23:49:46.676222 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-06-01 23:49:46.676233 | orchestrator | Sunday 01 June 2025 23:48:28 +0000 (0:00:00.789) 0:01:18.567 *********** 2025-06-01 23:49:46.676244 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:49:46.676255 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:49:46.676266 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:49:46.676276 | orchestrator | 2025-06-01 23:49:46.676296 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-06-01 23:49:46.676312 | orchestrator | Sunday 01 June 2025 23:48:29 +0000 (0:00:00.808) 0:01:19.376 *********** 2025-06-01 23:49:46.676323 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:49:46.676334 | 
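
The lookup_cluster step starting here decides whether the role must bootstrap a brand-new Raft cluster or join an existing one. It does so per host by checking whether the ovn_nb_db and ovn_sb_db container volumes already exist and then dividing the three DB hosts into groups. A sketch of that per-host decision, assuming the Docker CLI and the volume names visible in the container definitions above (illustrative only):

    import subprocess

    def volume_exists(name: str) -> bool:
        # `docker volume inspect` exits non-zero when the volume is absent.
        return subprocess.run(
            ["docker", "volume", "inspect", name],
            capture_output=True,
        ).returncode == 0

    # On a fresh deployment neither volume exists on any host, so all three
    # DB hosts fall into the "bootstrap a new cluster" group seen below.
    nb_is_new = not volume_exists("ovn_nb_db")
    sb_is_new = not volume_exists("ovn_sb_db")
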
orchestrator | ok: [testbed-node-1] 2025-06-01 23:49:46.676345 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:49:46.676362 | orchestrator | 2025-06-01 23:49:46.676373 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-06-01 23:49:46.676385 | orchestrator | Sunday 01 June 2025 23:48:29 +0000 (0:00:00.323) 0:01:19.700 *********** 2025-06-01 23:49:46.676395 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:49:46.676406 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:49:46.676417 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:49:46.676428 | orchestrator | 2025-06-01 23:49:46.676439 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-06-01 23:49:46.676457 | orchestrator | Sunday 01 June 2025 23:48:29 +0000 (0:00:00.399) 0:01:20.099 *********** 2025-06-01 23:49:46.676468 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:49:46.676479 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:49:46.676489 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:49:46.676500 | orchestrator | 2025-06-01 23:49:46.676511 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-06-01 23:49:46.676521 | orchestrator | Sunday 01 June 2025 23:48:30 +0000 (0:00:00.523) 0:01:20.622 *********** 2025-06-01 23:49:46.676532 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:49:46.676543 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:49:46.676553 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:49:46.676564 | orchestrator | 2025-06-01 23:49:46.676575 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-06-01 23:49:46.676586 | orchestrator | Sunday 01 June 2025 23:48:30 +0000 (0:00:00.312) 0:01:20.934 *********** 2025-06-01 23:49:46.676596 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:49:46.676607 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:49:46.676618 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:49:46.676629 | orchestrator | 2025-06-01 23:49:46.676639 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-06-01 23:49:46.676650 | orchestrator | Sunday 01 June 2025 23:48:30 +0000 (0:00:00.288) 0:01:21.223 *********** 2025-06-01 23:49:46.676661 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:49:46.676672 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:49:46.676682 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:49:46.676693 | orchestrator | 2025-06-01 23:49:46.676704 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-06-01 23:49:46.676714 | orchestrator | Sunday 01 June 2025 23:48:31 +0000 (0:00:00.279) 0:01:21.503 *********** 2025-06-01 23:49:46.676725 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:49:46.676736 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:49:46.676747 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:49:46.676757 | orchestrator | 2025-06-01 23:49:46.676768 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-06-01 23:49:46.676779 | orchestrator | Sunday 01 June 2025 23:48:31 +0000 (0:00:00.481) 0:01:21.984 *********** 2025-06-01 23:49:46.676790 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:49:46.676801 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:49:46.676811 | orchestrator | skipping: [testbed-node-2] 2025-06-01 
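
The service port liveness checks are skipped here because no cluster exists yet; on an existing deployment they probe whether each DB server actually answers on its TCP port before the role trusts the volume check alone. Only the southbound port 6642 is visible in this log (in the ovn-remote string above); 6641 for the northbound DB is the usual default and an assumption here. A minimal probe sketch:

    import socket

    def port_alive(host: str, port: int, timeout: float = 3.0) -> bool:
        # A completed TCP handshake is treated as "the DB server is up".
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    nb_up = port_alive("192.168.16.10", 6641)  # NB port assumed
    sb_up = port_alive("192.168.16.10", 6642)  # SB port as seen in ovn-remote
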
23:49:46.676822 | orchestrator | 2025-06-01 23:49:46.676833 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-06-01 23:49:46.676844 | orchestrator | Sunday 01 June 2025 23:48:31 +0000 (0:00:00.319) 0:01:22.303 *********** 2025-06-01 23:49:46.676854 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:49:46.676865 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:49:46.676876 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:49:46.676886 | orchestrator | 2025-06-01 23:49:46.676897 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-06-01 23:49:46.676908 | orchestrator | Sunday 01 June 2025 23:48:32 +0000 (0:00:00.281) 0:01:22.585 *********** 2025-06-01 23:49:46.676919 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:49:46.676929 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:49:46.676940 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:49:46.676951 | orchestrator | 2025-06-01 23:49:46.676961 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-06-01 23:49:46.676973 | orchestrator | Sunday 01 June 2025 23:48:32 +0000 (0:00:00.298) 0:01:22.883 *********** 2025-06-01 23:49:46.676983 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:49:46.677019 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:49:46.677030 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:49:46.677041 | orchestrator | 2025-06-01 23:49:46.677052 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-06-01 23:49:46.677062 | orchestrator | Sunday 01 June 2025 23:48:33 +0000 (0:00:00.517) 0:01:23.401 *********** 2025-06-01 23:49:46.677086 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:49:46.677096 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:49:46.677107 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:49:46.677118 | orchestrator | 2025-06-01 23:49:46.677129 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-06-01 23:49:46.677140 | orchestrator | Sunday 01 June 2025 23:48:33 +0000 (0:00:00.293) 0:01:23.694 *********** 2025-06-01 23:49:46.677151 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:49:46.677161 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:49:46.677172 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:49:46.677183 | orchestrator | 2025-06-01 23:49:46.677194 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-06-01 23:49:46.677204 | orchestrator | Sunday 01 June 2025 23:48:33 +0000 (0:00:00.381) 0:01:24.076 *********** 2025-06-01 23:49:46.677215 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:49:46.677226 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:49:46.677237 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:49:46.677247 | orchestrator | 2025-06-01 23:49:46.677258 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-06-01 23:49:46.677269 | orchestrator | Sunday 01 June 2025 23:48:34 +0000 (0:00:00.408) 0:01:24.484 *********** 2025-06-01 23:49:46.677280 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:49:46.677290 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:49:46.677301 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:49:46.677312 | orchestrator | 2025-06-01 23:49:46.677328 | 
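
For an existing cluster the role additionally splits the DB hosts into the Raft leader and its followers, and deliberately fails when no member reports itself as leader, since reconfiguring a leaderless cluster could make matters worse; all of that is skipped on this first deployment. One way to query the role of a clustered ovsdb-server, assuming the conventional control socket path (the role's own implementation may differ):

    import re
    import subprocess

    def cluster_role(ctl_socket: str, database: str) -> str:
        # `ovs-appctl cluster/status` prints a line such as "Role: leader".
        out = subprocess.run(
            ["ovs-appctl", "-t", ctl_socket, "cluster/status", database],
            capture_output=True, text=True, check=True,
        ).stdout
        match = re.search(r"^Role:\s*(\S+)", out, re.MULTILINE)
        return match.group(1) if match else "unknown"

    role = cluster_role("/run/ovn/ovnnb_db.ctl", "OVN_Northbound")  # path assumed
    if role != "leader":
        print("this member is a follower, or the cluster has no leader yet")
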
orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-06-01 23:49:46.677339 | orchestrator | Sunday 01 June 2025 23:48:34 +0000 (0:00:00.882) 0:01:25.367 *********** 2025-06-01 23:49:46.677350 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:49:46.677361 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:49:46.677377 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:49:46.677389 | orchestrator | 2025-06-01 23:49:46.677400 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-06-01 23:49:46.677411 | orchestrator | Sunday 01 June 2025 23:48:35 +0000 (0:00:00.612) 0:01:25.980 *********** 2025-06-01 23:49:46.677422 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:49:46.677433 | orchestrator | 2025-06-01 23:49:46.677443 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-06-01 23:49:46.677454 | orchestrator | Sunday 01 June 2025 23:48:36 +0000 (0:00:00.731) 0:01:26.711 *********** 2025-06-01 23:49:46.677465 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:49:46.677476 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:49:46.677487 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:49:46.677497 | orchestrator | 2025-06-01 23:49:46.677563 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-06-01 23:49:46.677576 | orchestrator | Sunday 01 June 2025 23:48:37 +0000 (0:00:00.845) 0:01:27.557 *********** 2025-06-01 23:49:46.677587 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:49:46.677598 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:49:46.677609 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:49:46.677619 | orchestrator | 2025-06-01 23:49:46.677630 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-06-01 23:49:46.677641 | orchestrator | Sunday 01 June 2025 23:48:37 +0000 (0:00:00.560) 0:01:28.118 *********** 2025-06-01 23:49:46.677652 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:49:46.677662 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:49:46.677673 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:49:46.677684 | orchestrator | 2025-06-01 23:49:46.677695 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-06-01 23:49:46.677705 | orchestrator | Sunday 01 June 2025 23:48:38 +0000 (0:00:00.368) 0:01:28.486 *********** 2025-06-01 23:49:46.677716 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:49:46.677742 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:49:46.677805 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:49:46.677819 | orchestrator | 2025-06-01 23:49:46.677830 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-06-01 23:49:46.677841 | orchestrator | Sunday 01 June 2025 23:48:38 +0000 (0:00:00.440) 0:01:28.926 *********** 2025-06-01 23:49:46.677851 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:49:46.677862 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:49:46.677873 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:49:46.677883 | orchestrator | 2025-06-01 23:49:46.677894 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-06-01 23:49:46.677905 | orchestrator | 
Sunday 01 June 2025 23:48:39 +0000 (0:00:00.612) 0:01:29.539 *********** 2025-06-01 23:49:46.677915 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:49:46.677926 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:49:46.677937 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:49:46.677948 | orchestrator | 2025-06-01 23:49:46.677958 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-06-01 23:49:46.677969 | orchestrator | Sunday 01 June 2025 23:48:39 +0000 (0:00:00.345) 0:01:29.884 *********** 2025-06-01 23:49:46.677980 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:49:46.678083 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:49:46.678099 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:49:46.678111 | orchestrator | 2025-06-01 23:49:46.678122 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-06-01 23:49:46.678133 | orchestrator | Sunday 01 June 2025 23:48:39 +0000 (0:00:00.305) 0:01:30.189 *********** 2025-06-01 23:49:46.678144 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:49:46.678155 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:49:46.678166 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:49:46.678176 | orchestrator | 2025-06-01 23:49:46.678187 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-06-01 23:49:46.678198 | orchestrator | Sunday 01 June 2025 23:48:40 +0000 (0:00:00.338) 0:01:30.527 *********** 2025-06-01 23:49:46.678211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.678225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.678236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.678265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.678280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.678301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.678313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.678325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.678338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.678356 | orchestrator | 2025-06-01 23:49:46.678406 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-06-01 23:49:46.678423 | orchestrator | Sunday 01 June 2025 23:48:41 +0000 (0:00:01.621) 0:01:32.149 *********** 2025-06-01 23:49:46.678437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.678457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.678476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.678488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.678513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.678534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.678545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.678557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.678568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.678579 | orchestrator | 2025-06-01 23:49:46.678590 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-06-01 23:49:46.678601 | orchestrator | Sunday 01 June 2025 23:48:45 +0000 (0:00:03.760) 0:01:35.910 *********** 2025-06-01 23:49:46.678612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.678624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 
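
The config.json files copied in the preceding task drive kolla's container entrypoint: at container start, kolla_start first copies the listed config_files from /var/lib/kolla/config_files (the read-only bind mount in the volume lists above) into place and then execs the service command. A sketch of the general shape of such a file, with placeholder values rather than this deployment's actual contents:

    import json

    # Shape of a kolla config.json; the command and file paths are placeholders.
    config = {
        "command": "ovn-northd",  # real deployments pass the full flag set here
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/example.conf",  # hypothetical
                "dest": "/etc/ovn/example.conf",                       # hypothetical
                "owner": "root",
                "perm": "0600",
            }
        ],
    }

    print(json.dumps(config, indent=4))
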
23:49:46.678635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.678646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.678657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.678686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.678698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.678710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.678721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.678732 | orchestrator | 2025-06-01 23:49:46.678743 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-01 23:49:46.678754 | orchestrator | Sunday 01 June 2025 23:48:47 +0000 (0:00:02.388) 0:01:38.299 *********** 2025-06-01 23:49:46.678765 | orchestrator | 2025-06-01 23:49:46.678776 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-01 23:49:46.678788 | orchestrator | Sunday 01 June 2025 23:48:48 
+0000 (0:00:00.102) 0:01:38.402 *********** 2025-06-01 23:49:46.678798 | orchestrator | 2025-06-01 23:49:46.678809 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-01 23:49:46.678820 | orchestrator | Sunday 01 June 2025 23:48:48 +0000 (0:00:00.064) 0:01:38.466 *********** 2025-06-01 23:49:46.678831 | orchestrator | 2025-06-01 23:49:46.678842 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-06-01 23:49:46.678853 | orchestrator | Sunday 01 June 2025 23:48:48 +0000 (0:00:00.067) 0:01:38.534 *********** 2025-06-01 23:49:46.678864 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:49:46.678875 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:49:46.678885 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:49:46.678896 | orchestrator | 2025-06-01 23:49:46.678907 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-06-01 23:49:46.678918 | orchestrator | Sunday 01 June 2025 23:48:55 +0000 (0:00:07.335) 0:01:45.869 *********** 2025-06-01 23:49:46.678929 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:49:46.678940 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:49:46.678951 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:49:46.678961 | orchestrator | 2025-06-01 23:49:46.678973 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-06-01 23:49:46.678984 | orchestrator | Sunday 01 June 2025 23:49:03 +0000 (0:00:07.833) 0:01:53.703 *********** 2025-06-01 23:49:46.679026 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:49:46.679045 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:49:46.679063 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:49:46.679082 | orchestrator | 2025-06-01 23:49:46.679093 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-06-01 23:49:46.679104 | orchestrator | Sunday 01 June 2025 23:49:05 +0000 (0:00:02.533) 0:01:56.236 *********** 2025-06-01 23:49:46.679123 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:49:46.679134 | orchestrator | 2025-06-01 23:49:46.679145 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-06-01 23:49:46.679156 | orchestrator | Sunday 01 June 2025 23:49:05 +0000 (0:00:00.120) 0:01:56.356 *********** 2025-06-01 23:49:46.679166 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:49:46.679177 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:49:46.679188 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:49:46.679198 | orchestrator | 2025-06-01 23:49:46.679209 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-06-01 23:49:46.679220 | orchestrator | Sunday 01 June 2025 23:49:06 +0000 (0:00:00.878) 0:01:57.235 *********** 2025-06-01 23:49:46.679231 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:49:46.679241 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:49:46.679252 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:49:46.679263 | orchestrator | 2025-06-01 23:49:46.679273 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-06-01 23:49:46.679284 | orchestrator | Sunday 01 June 2025 23:49:07 +0000 (0:00:00.942) 0:01:58.177 *********** 2025-06-01 23:49:46.679295 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:49:46.679306 | orchestrator | ok: 
[testbed-node-1] 2025-06-01 23:49:46.679316 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:49:46.679327 | orchestrator | 2025-06-01 23:49:46.679338 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-06-01 23:49:46.679349 | orchestrator | Sunday 01 June 2025 23:49:08 +0000 (0:00:00.934) 0:01:59.112 *********** 2025-06-01 23:49:46.679360 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:49:46.679370 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:49:46.679386 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:49:46.679398 | orchestrator | 2025-06-01 23:49:46.679409 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-06-01 23:49:46.679420 | orchestrator | Sunday 01 June 2025 23:49:09 +0000 (0:00:00.663) 0:01:59.775 *********** 2025-06-01 23:49:46.679431 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:49:46.679441 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:49:46.679459 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:49:46.679471 | orchestrator | 2025-06-01 23:49:46.679482 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-06-01 23:49:46.679492 | orchestrator | Sunday 01 June 2025 23:49:10 +0000 (0:00:00.860) 0:02:00.635 *********** 2025-06-01 23:49:46.679503 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:49:46.679514 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:49:46.679525 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:49:46.679536 | orchestrator | 2025-06-01 23:49:46.679546 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-06-01 23:49:46.679557 | orchestrator | Sunday 01 June 2025 23:49:11 +0000 (0:00:01.116) 0:02:01.752 *********** 2025-06-01 23:49:46.679568 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:49:46.679579 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:49:46.679590 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:49:46.679600 | orchestrator | 2025-06-01 23:49:46.679611 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-06-01 23:49:46.679622 | orchestrator | Sunday 01 June 2025 23:49:11 +0000 (0:00:00.359) 0:02:02.111 *********** 2025-06-01 23:49:46.679633 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.679645 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.679662 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.679674 | orchestrator | ok: [testbed-node-0] => 
(item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.679686 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.679697 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.679709 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.679720 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.679738 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.679749 | orchestrator | 2025-06-01 23:49:46.679760 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-06-01 23:49:46.679771 | orchestrator | Sunday 01 June 2025 23:49:13 +0000 (0:00:01.354) 0:02:03.465 *********** 2025-06-01 23:49:46.679782 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.679830 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.679853 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.679864 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.679876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.679887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.679898 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.679910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.679921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.679932 | orchestrator | 2025-06-01 23:49:46.679948 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-06-01 23:49:46.679959 | orchestrator | Sunday 01 June 2025 23:49:17 +0000 (0:00:03.932) 0:02:07.398 *********** 2025-06-01 23:49:46.679977 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.680049 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.680065 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.680090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.680101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.680113 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.680124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.680135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.680147 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-01 23:49:46.680158 | orchestrator | 2025-06-01 23:49:46.680169 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-01 23:49:46.680180 | orchestrator | Sunday 01 June 2025 23:49:19 +0000 (0:00:02.960) 0:02:10.359 *********** 2025-06-01 23:49:46.680191 | orchestrator | 2025-06-01 23:49:46.680202 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-01 23:49:46.680213 | orchestrator | Sunday 01 June 2025 23:49:20 +0000 (0:00:00.085) 0:02:10.444 *********** 2025-06-01 23:49:46.680223 | orchestrator | 2025-06-01 23:49:46.680234 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-01 23:49:46.680245 | orchestrator | Sunday 01 June 2025 23:49:20 +0000 (0:00:00.066) 0:02:10.511 *********** 2025-06-01 23:49:46.680255 | orchestrator | 2025-06-01 23:49:46.680271 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-06-01 23:49:46.680282 | orchestrator | Sunday 01 June 2025 23:49:20 +0000 (0:00:00.072) 0:02:10.583 *********** 2025-06-01 23:49:46.680293 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:49:46.680304 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:49:46.680315 | orchestrator | 2025-06-01 23:49:46.680332 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-06-01 23:49:46.680350 | orchestrator | Sunday 01 June 2025 23:49:26 +0000 (0:00:06.348) 0:02:16.932 *********** 2025-06-01 23:49:46.680361 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:49:46.680372 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:49:46.680383 | orchestrator | 2025-06-01 23:49:46.680394 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-06-01 23:49:46.680405 | orchestrator | Sunday 01 June 2025 23:49:32 +0000 (0:00:06.124) 0:02:23.057 *********** 2025-06-01 23:49:46.680416 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:49:46.680426 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:49:46.680437 | orchestrator | 2025-06-01 23:49:46.680448 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-06-01 23:49:46.680459 | orchestrator | Sunday 01 June 2025 23:49:38 +0000 (0:00:06.158) 0:02:29.215 *********** 2025-06-01 23:49:46.680469 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:49:46.680479 | orchestrator | 2025-06-01 23:49:46.680488 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-06-01 23:49:46.680498 | orchestrator | Sunday 01 June 2025 23:49:38 +0000 (0:00:00.140) 0:02:29.356 *********** 2025-06-01 23:49:46.680508 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:49:46.680517 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:49:46.680527 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:49:46.680536 | orchestrator | 2025-06-01 23:49:46.680546 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-06-01 23:49:46.680556 | orchestrator | Sunday 01 June 2025 23:49:40 +0000 (0:00:01.138) 0:02:30.494 *********** 2025-06-01 23:49:46.680565 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:49:46.680575 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:49:46.680584 | orchestrator | changed: 
2025-06-01 23:49:46.680594 | orchestrator |
2025-06-01 23:49:46.680603 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-06-01 23:49:46.680613 | orchestrator | Sunday 01 June 2025 23:49:40 +0000 (0:00:00.747) 0:02:31.242 ***********
2025-06-01 23:49:46.680623 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:49:46.680633 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:49:46.680642 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:49:46.680652 | orchestrator |
2025-06-01 23:49:46.680661 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-06-01 23:49:46.680671 | orchestrator | Sunday 01 June 2025 23:49:41 +0000 (0:00:00.892) 0:02:32.134 ***********
2025-06-01 23:49:46.680681 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:49:46.680691 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:49:46.680700 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:49:46.680710 | orchestrator |
2025-06-01 23:49:46.680719 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-06-01 23:49:46.680729 | orchestrator | Sunday 01 June 2025 23:49:42 +0000 (0:00:00.631) 0:02:32.766 ***********
2025-06-01 23:49:46.680738 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:49:46.680748 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:49:46.680758 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:49:46.680767 | orchestrator |
2025-06-01 23:49:46.680777 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-06-01 23:49:46.680787 | orchestrator | Sunday 01 June 2025 23:49:43 +0000 (0:00:00.931) 0:02:33.697 ***********
2025-06-01 23:49:46.680796 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:49:46.680806 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:49:46.680823 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:49:46.680835 | orchestrator |
2025-06-01 23:49:46.680845 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 23:49:46.680855 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-06-01 23:49:46.680869 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-06-01 23:49:46.680893 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-06-01 23:49:46.680904 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 23:49:46.680914 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 23:49:46.680923 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 23:49:46.680933 | orchestrator |
2025-06-01 23:49:46.680943 | orchestrator |
2025-06-01 23:49:46.680952 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 23:49:46.680962 | orchestrator | Sunday 01 June 2025 23:49:44 +0000 (0:00:01.024) 0:02:34.722 ***********
2025-06-01 23:49:46.680971 | orchestrator | ===============================================================================
2025-06-01 23:49:46.680981 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 38.15s
2025-06-01 23:49:46.681009 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.37s
2025-06-01 23:49:46.681020 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.96s
2025-06-01 23:49:46.681029 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.68s
2025-06-01 23:49:46.681043 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.69s
2025-06-01 23:49:46.681053 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.93s
2025-06-01 23:49:46.681063 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.76s
2025-06-01 23:49:46.681078 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.96s
2025-06-01 23:49:46.681088 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.55s
2025-06-01 23:49:46.681097 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.43s
2025-06-01 23:49:46.681107 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.39s
2025-06-01 23:49:46.681116 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.36s
2025-06-01 23:49:46.681125 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.94s
2025-06-01 23:49:46.681135 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.89s
2025-06-01 23:49:46.681144 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.88s
2025-06-01 23:49:46.681153 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.83s
2025-06-01 23:49:46.681163 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.62s
2025-06-01 23:49:46.681172 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.35s
2025-06-01 23:49:46.681181 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.30s
2025-06-01 23:49:46.681191 | orchestrator | ovn-db : Get OVN_Northbound cluster leader ------------------------------ 1.14s
2025-06-01 23:49:46.681200 | orchestrator | 2025-06-01 23:49:46 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED
2025-06-01 23:49:46.681210 | orchestrator | 2025-06-01 23:49:46 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED
2025-06-01 23:49:46.681318 | orchestrator | 2025-06-01 23:49:46 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:49:49.724276 | orchestrator | 2025-06-01 23:49:49 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED
2025-06-01 23:49:49.726865 | orchestrator | 2025-06-01 23:49:49 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED
2025-06-01 23:49:49.727052 | orchestrator | 2025-06-01 23:49:49 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:49:52.787924 | orchestrator | 2025-06-01 23:49:52 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED
2025-06-01 23:49:52.790944 | orchestrator | 2025-06-01 23:49:52 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED
2025-06-01 23:49:52.791068 | orchestrator | 2025-06-01 23:49:52 | INFO  | Wait 1 second(s) until the next check
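From this point the console interleaves the playbook output with the osism task watcher, which simply re-reads each task's state every few seconds until it reaches a terminal state (SUCCESS or FAILURE). The same poll-until-done idiom can be expressed as a plain Ansible retry loop; /usr/local/bin/task-state below is a hypothetical helper standing in for whatever reports the state:

- name: Wait for a task to leave the STARTED state (sketch; task-state is hypothetical)
  ansible.builtin.command: /usr/local/bin/task-state {{ task_id }}
  register: task_state
  changed_when: false
  failed_when: task_state.stdout == 'FAILURE'
  until: task_state.stdout in ['SUCCESS', 'FAILURE']
  retries: 600   # give up after roughly 30 minutes
  delay: 3       # matches the roughly 3-second cadence visible in this log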
2025-06-01 23:51:30.364376 | orchestrator | 2025-06-01 23:51:30 | INFO  | Task 70294b44-8119-40c8-9d65-ca1f19d45195 is in state STARTED
2025-06-01 23:51:30.365152 | orchestrator | 2025-06-01 23:51:30 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED
2025-06-01 23:51:30.365861 | orchestrator | 2025-06-01 23:51:30 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED
2025-06-01 23:51:30.366173 | orchestrator | 2025-06-01 23:51:30 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:51:48.647924 | orchestrator | 2025-06-01 23:51:48 | INFO  | Task 70294b44-8119-40c8-9d65-ca1f19d45195 is in state SUCCESS
2025-06-01 23:51:48.648348 | orchestrator | 2025-06-01 23:51:48 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED
2025-06-01 23:51:48.649733 | orchestrator | 2025-06-01 23:51:48 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state STARTED
2025-06-01 23:51:48.649843 | orchestrator | 2025-06-01 23:51:48 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:52:25.215622 | orchestrator | 2025-06-01 23:52:25 | INFO  | Task df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state STARTED
2025-06-01 23:52:25.217052 | orchestrator | 2025-06-01 23:52:25 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED
2025-06-01 23:52:25.224844 | orchestrator | 2025-06-01 23:52:25 | INFO  | Task 58530333-d854-4601-902d-e3cb3c0d0d74 is in state SUCCESS
2025-06-01 23:52:25.227667 | orchestrator |
2025-06-01 23:52:25.227705 | orchestrator | None
2025-06-01 23:52:25.227712 | orchestrator |
2025-06-01 23:52:25.227716 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-01 23:52:25.227722 | orchestrator |
2025-06-01 23:52:25.227726 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-01 23:52:25.227730 | orchestrator | Sunday 01 June 2025 23:45:53 +0000 (0:00:00.755) 0:00:00.755 ***********
2025-06-01 23:52:25.227734 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:52:25.227740 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:52:25.227744 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:52:25.227748 | orchestrator |
2025-06-01 23:52:25.227752 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-01 23:52:25.227756 | orchestrator | Sunday 01 June 2025 23:45:53 +0000 (0:00:00.730) 0:00:01.486 ***********
2025-06-01 23:52:25.227766 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-06-01 23:52:25.227770 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-06-01 23:52:25.227774 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-06-01 23:52:25.227777 | orchestrator |
2025-06-01 23:52:25.227781 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-06-01 23:52:25.227785 | orchestrator |
2025-06-01 23:52:25.227789 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-06-01 23:52:25.227793 | orchestrator | Sunday 01 June 2025 23:45:54 +0000 (0:00:00.766) 0:00:02.253 ***********
2025-06-01 23:52:25.227797 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 23:52:25.227801 | orchestrator |
2025-06-01 23:52:25.227805 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-06-01 23:52:25.227822 | orchestrator | Sunday 01 June 2025 23:45:55 +0000 (0:00:00.713) 0:00:02.966 ***********
2025-06-01 23:52:25.227826 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:52:25.227829 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:52:25.227833 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:52:25.227837 | orchestrator |
2025-06-01 23:52:25.227841 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-06-01 23:52:25.227844 | orchestrator | Sunday 01 June 2025 23:45:56 +0000 (0:00:00.937) 0:00:03.903 ***********
2025-06-01 23:52:25.227849 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 23:52:25.227857 | orchestrator |
2025-06-01 23:52:25.227861 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-06-01 23:52:25.227865 | orchestrator | Sunday 01 June 2025 23:45:57 +0000 (0:00:01.716) 0:00:05.620 ***********
2025-06-01 23:52:25.227869 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:52:25.227872 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:52:25.227876 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:52:25.227880 | orchestrator |
2025-06-01 23:52:25.227883 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-06-01 23:52:25.227887 | orchestrator | Sunday 01 June 2025 23:45:58 +0000 (0:00:00.729) 0:00:06.350 ***********
2025-06-01 23:52:25.227891 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-06-01 23:52:25.227895 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-06-01 23:52:25.227899 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-06-01 23:52:25.227902 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-06-01 23:52:25.227906 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-06-01 23:52:25.227911 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-06-01 23:52:25.227914 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-06-01 23:52:25.227918 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-06-01 23:52:25.227922 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-06-01 23:52:25.227926 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-06-01 23:52:25.227929 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-06-01 23:52:25.227933 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-06-01 23:52:25.227937 | orchestrator |
2025-06-01 23:52:25.227940 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-06-01 23:52:25.227945 | orchestrator | Sunday 01 June 2025 23:46:02 +0000 (0:00:03.707) 0:00:10.058 ***********
2025-06-01 23:52:25.227948 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-06-01 23:52:25.227953 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-06-01 23:52:25.227956 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-06-01 23:52:25.227960 | orchestrator |
2025-06-01 23:52:25.227964 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-06-01 23:52:25.228010 | orchestrator | Sunday 01 June 2025 23:46:03 +0000 (0:00:00.805) 0:00:10.864 ***********
2025-06-01 23:52:25.228015 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-06-01 23:52:25.228019 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-06-01 23:52:25.228023 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-06-01 23:52:25.228027 | orchestrator |
2025-06-01 23:52:25.228030 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-06-01 23:52:25.228034 | orchestrator | Sunday 01 June 2025 23:46:05 +0000 (0:00:02.109) 0:00:12.973 ***********
2025-06-01 23:52:25.228042 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2025-06-01 23:52:25.228046 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:52:25.228057 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2025-06-01 23:52:25.228062 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:52:25.228066 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2025-06-01 23:52:25.228070 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:52:25.228073 | orchestrator |
2025-06-01 23:52:25.228077 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2025-06-01 23:52:25.228081 | orchestrator | Sunday 01 June 2025 23:46:07 +0000 (0:00:02.247) 0:00:15.221 ***********
2025-06-01 23:52:25.228090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-01 23:52:25.228099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-01 23:52:25.228103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-01 23:52:25.228107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-01 23:52:25.228111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-01 23:52:25.228118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-01 23:52:25.228127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 
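Each of the long (item={'key': ..., 'value': ...}) entries in this play is one element of a services map that the role walks with the dict2items filter: the key is the service name, the value the container definition (image, volumes, optional healthcheck). A sketch of the shape, reconstructed from the items logged above; the variable name loadbalancer_services follows kolla-ansible's convention, but the snippet is illustrative rather than the role's actual source:

loadbalancer_services:
  haproxy:
    container_name: haproxy
    group: loadbalancer
    enabled: true
    image: registry.osism.tech/kolla/haproxy:2024.2
    privileged: true
    volumes:
      - /etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro
      - haproxy_socket:/var/lib/kolla/haproxy/
      - letsencrypt_certificates:/etc/haproxy/certificates
    healthcheck:
      interval: "30"
      retries: "3"
      start_period: "5"
      test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:61313"]
      timeout: "30"

# A loop such as "Ensuring config directories exist" then iterates the map:
- name: Ensuring config directories exist
  ansible.builtin.file:
    path: "/etc/kolla/{{ item.key }}"
    state: directory
    mode: "0770"   # illustrative mode, not taken from this log
  loop: "{{ loadbalancer_services | dict2items }}"
  when: item.value.enabled | bool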
2025-06-01 23:52:25.228134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-01 23:52:25.228138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-01 23:52:25.228142 | orchestrator |
2025-06-01 23:52:25.228145 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2025-06-01 23:52:25.228149 | orchestrator | Sunday 01 June 2025 23:46:10 +0000 (0:00:02.837) 0:00:18.058 ***********
2025-06-01 23:52:25.228153 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:52:25.228157 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:52:25.228161 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:52:25.228165 | orchestrator |
2025-06-01 23:52:25.228168 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2025-06-01 23:52:25.228172 | orchestrator | Sunday 01 June 2025 23:46:11 +0000 (0:00:01.223) 0:00:19.282 ***********
2025-06-01 23:52:25.228176 | orchestrator | changed: [testbed-node-0] => (item=users)
2025-06-01 23:52:25.228180 | orchestrator | changed: [testbed-node-1] => (item=users)
2025-06-01 23:52:25.228184 | orchestrator | changed: [testbed-node-2] => (item=users)
2025-06-01 23:52:25.228187 | orchestrator | changed: [testbed-node-0] => (item=rules)
2025-06-01 23:52:25.228191 | orchestrator | changed: [testbed-node-1] => (item=rules)
2025-06-01 23:52:25.228195 | orchestrator | changed: [testbed-node-2] => (item=rules)
2025-06-01 23:52:25.228200 | orchestrator |
2025-06-01 23:52:25.228206 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2025-06-01 23:52:25.228212 | orchestrator | Sunday 01 June 2025 23:46:14 +0000 (0:00:03.323) 0:00:22.605 ***********
2025-06-01 23:52:25.228219 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:52:25.228224 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:52:25.228230 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:52:25.228235 | orchestrator |
2025-06-01 23:52:25.228241 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2025-06-01 23:52:25.228247 | orchestrator | Sunday 01 June 2025 23:46:17 +0000 (0:00:02.440) 0:00:25.046 ***********
2025-06-01 23:52:25.228257 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:52:25.228264 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:52:25.228270 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:52:25.228276 | orchestrator |
2025-06-01 23:52:25.228281 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2025-06-01 23:52:25.228285 |
orchestrator | Sunday 01 June 2025 23:46:19 +0000 (0:00:02.426) 0:00:27.474 *********** 2025-06-01 23:52:25.228290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-01 23:52:25.228305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 23:52:25.228312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 23:52:25.228318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fe8a9683a395cb321d7f587b711300b621c6405e', '__omit_place_holder__fe8a9683a395cb321d7f587b711300b621c6405e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-01 23:52:25.228323 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.228327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-01 23:52:25.228332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 23:52:25.228339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 23:52:25.228344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fe8a9683a395cb321d7f587b711300b621c6405e', '__omit_place_holder__fe8a9683a395cb321d7f587b711300b621c6405e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-01 23:52:25.228348 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.228355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-01 23:52:25.228361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 23:52:25.228366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 23:52:25.228371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fe8a9683a395cb321d7f587b711300b621c6405e', '__omit_place_holder__fe8a9683a395cb321d7f587b711300b621c6405e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-01 23:52:25.228381 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.228386 | orchestrator | 2025-06-01 23:52:25.228390 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-06-01 23:52:25.228394 | orchestrator | Sunday 01 June 2025 23:46:21 +0000 (0:00:02.114) 0:00:29.589 *********** 2025-06-01 23:52:25.228399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-01 23:52:25.228403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-01 23:52:25.228412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-01 23:52:25.228419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-01 23:52:25.228423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 23:52:25.228428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fe8a9683a395cb321d7f587b711300b621c6405e', '__omit_place_holder__fe8a9683a395cb321d7f587b711300b621c6405e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-01 23:52:25.228440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-01 23:52:25.228444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 23:52:25.228449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fe8a9683a395cb321d7f587b711300b621c6405e', '__omit_place_holder__fe8a9683a395cb321d7f587b711300b621c6405e'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-01 23:52:25.228456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-01 23:52:25.228462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 23:52:25.228467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fe8a9683a395cb321d7f587b711300b621c6405e', '__omit_place_holder__fe8a9683a395cb321d7f587b711300b621c6405e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-01 23:52:25.228477 | orchestrator | 2025-06-01 23:52:25.228481 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-06-01 23:52:25.228485 | orchestrator | Sunday 01 June 2025 23:46:25 +0000 (0:00:03.199) 0:00:32.788 *********** 2025-06-01 23:52:25.228494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-01 23:52:25.228499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-01 23:52:25.228503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-01 23:52:25.228511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-01 23:52:25.228518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-01 23:52:25.228522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-01 23:52:25.228529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-01 23:52:25.228534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-01 23:52:25.228538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-01 23:52:25.228543 | orchestrator |
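The config.json files copied in the task above drive kolla's container bootstrap: on startup each kolla image runs kolla_start, which reads /var/lib/kolla/config_files/config.json (mounted read-only through the volumes listed in the items) and copies the rendered files into place before launching the service. A minimal Python sketch of the general shape of such a file; the command and file list below are illustrative assumptions, not values taken from this log:

import json

# Illustrative sketch only: the general shape of a kolla-style config.json.
# The "command" and the file list that kolla-ansible actually renders for
# haproxy are assumptions, not taken from this log.
haproxy_config_json = {
    "command": "haproxy -f /etc/haproxy/haproxy.cfg",  # hypothetical command
    "config_files": [
        {
            "source": "/var/lib/kolla/config_files/haproxy.cfg",
            "dest": "/etc/haproxy/haproxy.cfg",
            "owner": "haproxy",
            "perm": "0600",
        }
    ],
}

# kolla_start reads this file from /var/lib/kolla/config_files/config.json
# and copies each entry into place before the service process starts.
with open("config.json", "w", encoding="utf-8") as handle:
    json.dump(haproxy_config_json, handle, indent=2)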
2025-06-01 23:52:25.228547 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-06-01 23:52:25.228555 | orchestrator | Sunday 01 June 2025 23:46:28 +0000 (0:00:03.691) 0:00:36.480 *********** 2025-06-01 23:52:25.228563 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-06-01 23:52:25.228567 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-06-01 23:52:25.228572 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-06-01 23:52:25.228576 | orchestrator | 2025-06-01 23:52:25.228580 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-06-01 23:52:25.228584 | orchestrator | Sunday 01 June 2025 23:46:30 +0000 (0:00:02.020) 0:00:38.501 *********** 2025-06-01 23:52:25.228589 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-06-01 23:52:25.228593 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-06-01 23:52:25.228982 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-06-01 23:52:25.228998 | orchestrator | 2025-06-01 23:52:25.229002 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-06-01 23:52:25.229006 | orchestrator | Sunday 01 June 2025 23:46:36 +0000 (0:00:05.800) 0:00:44.301 *********** 2025-06-01 23:52:25.229010 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.229014 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.229018 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.229022 | orchestrator | 2025-06-01 23:52:25.229028 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-06-01 23:52:25.229032 | orchestrator | Sunday 01 June 2025 23:46:37 +0000 (0:00:00.616) 0:00:44.918 *********** 2025-06-01 23:52:25.229036 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-06-01 23:52:25.229041 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-06-01 23:52:25.229050 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-06-01 23:52:25.229054 | orchestrator |
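Each service item in these loops carries a healthcheck dict whose durations are strings of seconds ('interval': '30', and so on) and whose test uses Docker's CMD-SHELL form. When the container is created, this maps onto Docker's native healthcheck, which measures durations in nanoseconds. A rough standalone sketch of that mapping, assuming the second-valued strings shown above; kolla-ansible's container module performs the real conversion internally:

# Rough sketch: convert a kolla-style healthcheck dict (string seconds)
# into the nanosecond-based fields Docker's API expects.
NANOSECONDS_PER_SECOND = 1_000_000_000

def to_docker_healthcheck(healthcheck: dict) -> dict:
    return {
        "test": healthcheck["test"],  # e.g. ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313']
        "interval": int(healthcheck["interval"]) * NANOSECONDS_PER_SECOND,
        "timeout": int(healthcheck["timeout"]) * NANOSECONDS_PER_SECOND,
        "start_period": int(healthcheck["start_period"]) * NANOSECONDS_PER_SECOND,
        "retries": int(healthcheck["retries"]),
    }

print(to_docker_healthcheck({
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_listen proxysql 6032"], "timeout": "30",
}))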
2025-06-01 23:52:25.229058 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-06-01 23:52:25.229062 | orchestrator | Sunday 01 June 2025 23:46:40 +0000 (0:00:02.892) 0:00:47.811 *********** 2025-06-01 23:52:25.229066 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-06-01 23:52:25.229070 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-06-01 23:52:25.229074 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-06-01 23:52:25.229078 | orchestrator | 2025-06-01 23:52:25.229081 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-06-01 23:52:25.229085 | orchestrator | Sunday 01 June 2025 23:46:42 +0000 (0:00:02.053) 0:00:49.865 *********** 2025-06-01 23:52:25.229089 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-06-01 23:52:25.229093 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-06-01 23:52:25.229097 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-06-01 23:52:25.229101 | orchestrator | 2025-06-01 23:52:25.229104 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-06-01 23:52:25.229108 | orchestrator | Sunday 01 June 2025 23:46:44 +0000 (0:00:02.027) 0:00:51.893 *********** 2025-06-01 23:52:25.229112 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-06-01 23:52:25.229116 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-06-01 23:52:25.229119 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-06-01 23:52:25.229123 | orchestrator | 2025-06-01 23:52:25.229127 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-06-01 23:52:25.229131 | orchestrator | Sunday 01 June 2025 23:46:45 +0000 (0:00:01.585) 0:00:53.478 *********** 2025-06-01 23:52:25.229134 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:52:25.229138 | orchestrator |
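haproxy.pem and haproxy-internal.pem, distributed in the two tasks above, are the TLS bundles for the external and internal frontends: HAProxy expects a single PEM file containing the certificate (plus any intermediate chain) followed by the private key. A minimal sketch of assembling such a bundle, with hypothetical input paths:

from pathlib import Path

# Minimal sketch, hypothetical paths: concatenate the certificate (and any
# chain) with the private key into the single PEM bundle HAProxy loads.
def build_pem_bundle(cert_path: str, key_path: str, bundle_path: str) -> None:
    bundle = Path(cert_path).read_text() + Path(key_path).read_text()
    Path(bundle_path).write_text(bundle)

build_pem_bundle("haproxy.crt", "haproxy.key", "haproxy.pem")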
2025-06-01 23:52:25.229142 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-06-01 23:52:25.229146 | orchestrator | Sunday 01 June 2025 23:46:46 +0000 (0:00:00.982) 0:00:54.461 *********** 2025-06-01 23:52:25.229150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-01 23:52:25.229155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-01 23:52:25.229180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-01 23:52:25.229189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-01 23:52:25.229193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-01 23:52:25.229197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-01 23:52:25.229201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-01 23:52:25.229205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-01 23:52:25.229209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-01 23:52:25.229217 | orchestrator | 2025-06-01 23:52:25.229221 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-06-01 23:52:25.229225 | orchestrator | Sunday 01 June 2025 23:46:50 +0000 (0:00:03.336) 0:00:57.797 *********** 2025-06-01 23:52:25.229233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-01 23:52:25.229239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 23:52:25.229243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 23:52:25.229247 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.229251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-01 23:52:25.229255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 23:52:25.229259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 23:52:25.229263 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.229270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-01 23:52:25.229278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 23:52:25.229283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 23:52:25.229287 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.229291 | orchestrator | 2025-06-01 23:52:25.229295 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-06-01 23:52:25.229299 | orchestrator | Sunday 01 June 2025 23:46:50 +0000 (0:00:00.770) 0:00:58.568 *********** 2025-06-01 23:52:25.229302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-01 23:52:25.229306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 23:52:25.229310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 23:52:25.229314 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.229321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-01 23:52:25.229327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 23:52:25.229333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 23:52:25.229337 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.229341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-01 23:52:25.229345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 23:52:25.229349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 23:52:25.229353 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.229357 | orchestrator | 2025-06-01 23:52:25.229361 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-06-01 23:52:25.229364 | orchestrator | Sunday 01 June 2025 23:46:52 +0000 (0:00:01.300) 0:00:59.868 *********** 2025-06-01 23:52:25.229371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-01 23:52:25.229377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 23:52:25.229381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 23:52:25.229388 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.229392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-01 23:52:25.229396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 23:52:25.229400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 23:52:25.229404 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.229408 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-01 23:52:25.229415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 23:52:25.229421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 23:52:25.229425 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.229428 | orchestrator | 2025-06-01 23:52:25.229432 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-06-01 23:52:25.229436 | orchestrator | Sunday 01 June 2025 23:46:53 +0000 (0:00:00.880) 0:01:00.749 *********** 2025-06-01 23:52:25.229442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-01 23:52:25.229446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 
23:52:25.229450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 23:52:25.229454 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.229458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-01 23:52:25.229465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 23:52:25.229469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 23:52:25.229473 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.229479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-01 23:52:25.229486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 23:52:25.229490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 23:52:25.229494 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.229497 | orchestrator | 2025-06-01 23:52:25.229501 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-06-01 23:52:25.229505 | orchestrator | Sunday 01 June 2025 23:46:53 +0000 (0:00:00.866) 0:01:01.616 *********** 2025-06-01 23:52:25.229509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-01 23:52:25.229516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 23:52:25.229520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 23:52:25.229524 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.229531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-01 23:52:25.229540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 23:52:25.229546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 23:52:25.229552 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.229559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-01 23:52:25.229569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 23:52:25.229576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 23:52:25.229582 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.229586 | orchestrator |
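copy-certs.yml is included once per project (loadbalancer, mariadb, proxysql), and each inclusion iterates over the same three-service dict, which is why every backend-TLS task above skips three items per node. A compact Python sketch of that control flow; kolla_enable_tls_backend is assumed to be the switch that is off here:

# Compact sketch of the control flow visible in these tasks: one include
# per project, each looping over the same services, with every backend-TLS
# item skipping while the (assumed) kolla_enable_tls_backend switch is off.
kolla_enable_tls_backend = False
services = ("haproxy", "proxysql", "keepalived")

for project in ("loadbalancer", "mariadb", "proxysql"):
    for service in services:
        if not kolla_enable_tls_backend:
            print(f"skipping: {project} | {service}")
            continue
        print(f"copying backend TLS material: {project} | {service}")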
2025-06-01 23:52:25.229591 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-06-01 23:52:25.229595 | orchestrator | Sunday 01 June 2025 23:46:55 +0000 (0:00:01.256) 0:01:02.873 *********** 2025-06-01 23:52:25.229599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-01 23:52:25.229607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 23:52:25.229613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 23:52:25.229618 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.229622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-01 23:52:25.229630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': 
'30'}}})  2025-06-01 23:52:25.229634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 23:52:25.229639 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.229643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-01 23:52:25.229650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 23:52:25.229655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 23:52:25.229661 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.229665 | orchestrator | 2025-06-01 23:52:25.229670 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-06-01 23:52:25.229674 | orchestrator | Sunday 01 June 2025 23:46:55 +0000 (0:00:00.704) 0:01:03.577 *********** 2025-06-01 23:52:25.229679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-01 
23:52:25.229686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 23:52:25.229690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 23:52:25.229695 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.229699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-01 23:52:25.229703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 23:52:25.229717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 23:52:25.229722 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.229728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-01 23:52:25.229744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 23:52:25.229749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 23:52:25.229753 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.229757 | orchestrator | 2025-06-01 23:52:25.229768 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-06-01 23:52:25.229772 | orchestrator | Sunday 01 June 2025 23:46:56 +0000 (0:00:01.000) 0:01:04.578 *********** 2025-06-01 23:52:25.229776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-01 23:52:25.229781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 23:52:25.229785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 23:52:25.229790 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.229800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-01 23:52:25.229809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-01 23:52:25.229813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-01 23:52:25.229817 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.229822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-01 23:52:25.229826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-01 23:52:25.229831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-01 23:52:25.229835 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:52:25.229840 | orchestrator |
2025-06-01 23:52:25.229844 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2025-06-01 23:52:25.229848 | orchestrator | Sunday 01 June 2025 23:46:59 +0000 (0:00:02.450) 0:01:07.028 ***********
2025-06-01 23:52:25.229852 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-06-01 23:52:25.229857 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-06-01 23:52:25.229863 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-06-01 23:52:25.229868 | orchestrator |
2025-06-01 23:52:25.229876 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2025-06-01 23:52:25.229885 | orchestrator | Sunday 01 June 2025 23:47:00 +0000 (0:00:01.438) 0:01:08.467 ***********
2025-06-01 23:52:25.229889 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-06-01 23:52:25.229894 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-06-01 23:52:25.229898 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-06-01 23:52:25.229902 | orchestrator |
2025-06-01 23:52:25.229909 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2025-06-01 23:52:25.229913 | orchestrator | Sunday 01 June 2025 23:47:02 +0000 (0:00:01.121) 0:01:09.883 ***********
2025-06-01 23:52:25.229918 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-06-01 23:52:25.229922 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-06-01 23:52:25.229927 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-06-01 23:52:25.229931 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-01 23:52:25.229936 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:52:25.229939 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-01 23:52:25.229943 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:52:25.229947 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-01 23:52:25.229951 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:52:25.229954 | orchestrator |
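The two start-script tasks above are the only file copies that actually ran for the loadbalancer role here; the service-cert-copy tasks before them and the haproxy-ssh files were skipped on every node. That is the pattern kolla-ansible produces when backend TLS and the Let's Encrypt integration are switched off. A minimal globals.yml sketch of the two toggles involved (assumed values for illustration only, not read from this job's configuration):

  kolla_enable_tls_backend: "no"   # "yes" would run the backend TLS key/cert copy tasks
  enable_letsencrypt: "no"         # "yes" would run the haproxy-ssh copies used for certificate handling

The letsencrypt_certificates volume mounted into the haproxy container in the check below exists either way; it simply stays unused when Let's Encrypt is not enabled.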
2025-06-01 23:52:25.229960 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2025-06-01 23:52:25.229964 | orchestrator | Sunday 01 June 2025 23:47:03 +0000 (0:00:01.121) 0:01:11.004 ***********
2025-06-01 23:52:25.229982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-01 23:52:25.229986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-01 23:52:25.229990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-01 23:52:25.229999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-01 23:52:25.230006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-01 23:52:25.230010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-01 23:52:25.230056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-01 23:52:25.230062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-01 23:52:25.230066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-01 23:52:25.230069 | orchestrator |
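Each loop item in the check above is one entry of kolla-ansible's loadbalancer service map, rendered by Ansible as a Python dict. Untangled into YAML, the keepalived entry that all three nodes just reported as changed reads (transcribed from the log output, nothing added):

  keepalived:
    container_name: keepalived
    group: loadbalancer
    enabled: true
    image: registry.osism.tech/kolla/keepalived:2024.2
    privileged: true
    volumes:
      - /etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - /lib/modules:/lib/modules:ro
      - haproxy_socket:/var/lib/kolla/haproxy/
      - proxysql_socket:/var/lib/kolla/proxysql/
    dimensions: {}

Note that keepalived is the only one of the three containers without a healthcheck block: haproxy is probed per node with healthcheck_curl against its monitor port 61313, and proxysql with healthcheck_listen on its admin port 6032.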
2025-06-01 23:52:25.230073 | orchestrator | TASK [include_role : aodh] *****************************************************
2025-06-01 23:52:25.230077 | orchestrator | Sunday 01 June 2025 23:47:05 +0000 (0:00:02.652) 0:01:13.657 ***********
2025-06-01 23:52:25.230081 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 23:52:25.230085 | orchestrator |
2025-06-01 23:52:25.230089 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2025-06-01 23:52:25.230097 | orchestrator | Sunday 01 June 2025 23:47:06 +0000 (0:00:00.882) 0:01:14.539 ***********
2025-06-01 23:52:25.230107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-06-01 23:52:25.230114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-06-01 23:52:25.230119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-06-01 23:52:25.230123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-06-01 23:52:25.230127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-06-01 23:52:25.230131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL',
'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-01 23:52:25.230137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.231127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.231247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-01 23:52:25.231268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-01 23:52:25.231280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.231297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 
'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.231341 | orchestrator | 2025-06-01 23:52:25.231362 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-06-01 23:52:25.231380 | orchestrator | Sunday 01 June 2025 23:47:10 +0000 (0:00:03.823) 0:01:18.363 *********** 2025-06-01 23:52:25.231398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-01 23:52:25.231436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-01 23:52:25.231463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.231475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.231485 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.231496 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-01 23:52:25.231506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-01 23:52:25.231525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.231536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.231560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  
2025-06-01 23:52:25.231571 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.231581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-01 23:52:25.231591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.231602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.231618 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.231628 | orchestrator | 2025-06-01 23:52:25.231638 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-06-01 23:52:25.231648 | orchestrator | Sunday 01 June 2025 23:47:11 +0000 (0:00:00.962) 0:01:19.325 *********** 2025-06-01 23:52:25.231660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-01 23:52:25.231677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-01 23:52:25.231696 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.231713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-01 23:52:25.231730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-01 23:52:25.231746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-01 23:52:25.231762 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.231780 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-06-01 23:52:25.231798 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:52:25.231815 | orchestrator |
2025-06-01 23:52:25.231840 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2025-06-01 23:52:25.231859 | orchestrator | Sunday 01 June 2025 23:47:12 +0000 (0:00:01.184) 0:01:20.509 ***********
2025-06-01 23:52:25.231874 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:52:25.231890 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:52:25.231907 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:52:25.231923 | orchestrator |
2025-06-01 23:52:25.231937 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2025-06-01 23:52:25.231947 | orchestrator | Sunday 01 June 2025 23:47:14 +0000 (0:00:01.667) 0:01:22.177 ***********
2025-06-01 23:52:25.231962 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:52:25.232006 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:52:25.232023 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:52:25.232039 | orchestrator |
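The haproxy-config tasks for each service are driven by the per-service haproxy sub-map visible in the loop items above. For aodh it is an internal/external frontend pair; in YAML form (values transcribed from the log):

  haproxy:
    aodh_api:
      enabled: "yes"
      mode: http
      external: false
      port: "8042"
      listen_port: "8042"
    aodh_api_external:
      enabled: "yes"
      mode: http
      external: true
      external_fqdn: api.testbed.osism.xyz
      port: "8042"
      listen_port: "8042"

The two entries differ only in external and the shared external_fqdn, which is why every service in this run publishes under api.testbed.osism.xyz. The skipped "single external frontend" tasks represent the alternative mode in which those per-service external listeners would be folded into one shared frontend; it appears to be disabled in this testbed.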
2025-06-01 23:52:25.232053 | orchestrator | TASK [include_role : barbican] *************************************************
2025-06-01 23:52:25.232063 | orchestrator | Sunday 01 June 2025 23:47:16 +0000 (0:00:02.399) 0:01:24.577 ***********
2025-06-01 23:52:25.232073 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 23:52:25.232082 | orchestrator |
2025-06-01 23:52:25.232092 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2025-06-01 23:52:25.232102 | orchestrator | Sunday 01 June 2025 23:47:17 +0000 (0:00:00.865) 0:01:25.442 ***********
2025-06-01 23:52:25.232114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-01 23:52:25.232135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-01 23:52:25.232176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-01 23:52:25.232188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-01 23:52:25.232212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-01 23:52:25.232223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-01 23:52:25.232239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl
http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-01 23:52:25.232249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.232260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.232270 | orchestrator | 2025-06-01 23:52:25.232280 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-06-01 23:52:25.232290 | orchestrator | Sunday 01 June 2025 23:47:23 +0000 (0:00:06.016) 0:01:31.459 *********** 2025-06-01 23:52:25.232307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-01 23:52:25.232322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.232338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.232348 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.232358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-01 23:52:25.232369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-01 23:52:25.232384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.232400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 
'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.232410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.232427 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.232437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.232447 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.232457 | orchestrator | 2025-06-01 23:52:25.232467 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-06-01 23:52:25.232477 | orchestrator | Sunday 01 June 2025 23:47:24 +0000 (0:00:00.746) 0:01:32.206 *********** 2025-06-01 23:52:25.232486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-01 23:52:25.232502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-01 23:52:25.232519 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.232535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-01 23:52:25.232550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-01 23:52:25.232565 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.232579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 
'no'}})
2025-06-01 23:52:25.232593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-06-01 23:52:25.232608 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:52:25.232621 | orchestrator |
2025-06-01 23:52:25.232635 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2025-06-01 23:52:25.232651 | orchestrator | Sunday 01 June 2025 23:47:25 +0000 (0:00:00.838) 0:01:33.044 ***********
2025-06-01 23:52:25.232665 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:52:25.232681 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:52:25.232695 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:52:25.232709 | orchestrator |
2025-06-01 23:52:25.232723 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2025-06-01 23:52:25.232738 | orchestrator | Sunday 01 June 2025 23:47:29 +0000 (0:00:04.001) 0:01:37.046 ***********
2025-06-01 23:52:25.232753 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:52:25.232769 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:52:25.232782 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:52:25.232817 | orchestrator |
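For each service with a database, the pair of proxysql-config tasks above publishes a users file and a rules file so that the service's MySQL traffic is routed through ProxySQL to the Galera cluster rather than through a plain TCP passthrough. The exact file layout is not visible in this log; a purely hypothetical sketch of what one users entry conveys (field names assumed, not read from the log):

  # hypothetical shape, for illustration only
  - username: barbican                              # assumed service DB user
    password: "{{ barbican_database_password }}"    # assumed kolla secret reference
    # plus routing defaults such as the writer hostgroup of the Galera cluster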
2025-06-01 23:52:25.232844 | orchestrator | TASK [include_role : blazar] ***************************************************
2025-06-01 23:52:25.232861 | orchestrator | Sunday 01 June 2025 23:47:31 +0000 (0:00:01.798) 0:01:38.844 ***********
2025-06-01 23:52:25.232876 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:52:25.232892 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:52:25.232906 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:52:25.232922 | orchestrator |
2025-06-01 23:52:25.232938 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2025-06-01 23:52:25.232954 | orchestrator | Sunday 01 June 2025 23:47:31 +0000 (0:00:00.484) 0:01:39.329 ***********
2025-06-01 23:52:25.233038 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 23:52:25.233067 | orchestrator |
2025-06-01 23:52:25.233100 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] *******************
2025-06-01 23:52:25.233123 | orchestrator | Sunday 01 June 2025 23:47:32 +0000 (0:00:00.676) 0:01:40.006 ***********
2025-06-01 23:52:25.233146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-06-01 23:52:25.233164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-06-01 23:52:25.233183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-06-01 23:52:25.233199 | orchestrator |
2025-06-01 23:52:25.233216 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] ***
2025-06-01 23:52:25.233233 | orchestrator | Sunday 01 June 2025 23:47:35 +0000 (0:00:03.529) 0:01:43.535 ***********
2025-06-01 23:52:25.233261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-06-01 23:52:25.233284 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:52:25.233300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server
testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-01 23:52:25.233311 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.233321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-01 23:52:25.233331 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.233346 | orchestrator | 2025-06-01 23:52:25.233362 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-06-01 23:52:25.233378 | orchestrator | Sunday 01 June 2025 23:47:38 +0000 (0:00:02.732) 0:01:46.267 *********** 2025-06-01 23:52:25.233398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-01 23:52:25.233423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-01 23:52:25.233441 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.233459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-01 23:52:25.233488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-01 23:52:25.233505 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.233532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-01 23:52:25.233550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-01 23:52:25.233575 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.233592 | orchestrator | 2025-06-01 23:52:25.233609 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-06-01 23:52:25.233625 | orchestrator | Sunday 01 June 2025 23:47:41 +0000 (0:00:02.721) 0:01:48.989 *********** 2025-06-01 23:52:25.233642 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.233658 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.233675 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.233691 | orchestrator | 2025-06-01 23:52:25.233708 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-06-01 23:52:25.233725 | orchestrator | Sunday 01 June 2025 23:47:42 +0000 (0:00:00.916) 0:01:49.906 *********** 2025-06-01 23:52:25.233741 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.233758 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.233775 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.233792 | orchestrator | 2025-06-01 23:52:25.233809 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-06-01 23:52:25.233825 | orchestrator | Sunday 01 June 2025 23:47:43 +0000 (0:00:01.274) 0:01:51.181 *********** 2025-06-01 23:52:25.233842 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:52:25.233858 | orchestrator | 2025-06-01 23:52:25.233875 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-06-01 23:52:25.233891 | orchestrator | Sunday 01 June 2025 23:47:44 +0000 (0:00:00.971) 0:01:52.152 *********** 2025-06-01 23:52:25.233908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-01 23:52:25.233964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': 
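A note on the ceph-rgw entries above: each radosgw frontend carries a custom_member_list, and haproxy-config renders those strings essentially verbatim as server lines of the resulting HAProxy section (here pointing at testbed-node-3/4/5 on port 8081, exposed through the VIP on 6780). A minimal sketch of that mapping in Python, assuming a hypothetical render_listen() helper rather than kolla-ansible's actual Jinja template; the VIP address is also an assumption:

    # Illustrative only: approximates how a custom_member_list becomes an
    # HAProxy "listen" section; the real kolla-ansible template differs.
    def render_listen(name, vip, svc):
        lines = [
            f"listen {name}",
            f"    mode {svc['mode']}",
            f"    bind {vip}:{svc['port']}",
        ]
        # Member lines are emitted as-is from the configured list.
        lines += [f"    {member}" for member in svc["custom_member_list"]]
        return "\n".join(lines)

    radosgw = {
        "mode": "http",
        "port": "6780",
        "custom_member_list": [
            "server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5",
            "server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5",
            "server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5",
        ],
    }

    print(render_listen("radosgw", "192.168.16.9", radosgw))  # VIP assumed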
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.234100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.234133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.234179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-01 23:52:25.234200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-01 23:52:25.234230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.234247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.234274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.234298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.234309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 
'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.234322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.234349 | orchestrator | 2025-06-01 23:52:25.234366 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-06-01 23:52:25.234382 | orchestrator | Sunday 01 June 2025 23:47:50 +0000 (0:00:05.633) 0:01:57.786 *********** 2025-06-01 23:52:25.234399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-01 23:52:25.234417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.234459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.234483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.234496 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.234506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-01 23:52:25.234525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-01 23:52:25.234535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.234552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.234567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.234578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.234593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.234603 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.234613 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.234623 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.234633 | orchestrator | 2025-06-01 23:52:25.234659 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-06-01 23:52:25.234669 | orchestrator | Sunday 01 June 2025 23:47:51 +0000 (0:00:01.237) 0:01:59.024 *********** 2025-06-01 23:52:25.234679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-01 23:52:25.234705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-01 23:52:25.234716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-01 23:52:25.234738 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.234757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-01 23:52:25.234768 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.234778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-01 23:52:25.234788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-01 23:52:25.234798 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.234807 | orchestrator | 2025-06-01 23:52:25.234817 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-06-01 23:52:25.234834 | orchestrator | Sunday 01 June 2025 23:47:52 +0000 (0:00:01.287) 0:02:00.311 *********** 2025-06-01 23:52:25.234852 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:52:25.234870 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:52:25.234887 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:52:25.234901 | orchestrator | 2025-06-01 23:52:25.234912 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-06-01 23:52:25.234921 | orchestrator | Sunday 01 June 2025 23:47:54 +0000 (0:00:01.757) 0:02:02.068 *********** 2025-06-01 
23:52:25.234930 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:52:25.234940 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:52:25.234949 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:52:25.234959 | orchestrator | 2025-06-01 23:52:25.234993 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-06-01 23:52:25.235011 | orchestrator | Sunday 01 June 2025 23:47:56 +0000 (0:00:02.255) 0:02:04.324 *********** 2025-06-01 23:52:25.235021 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.235030 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.235040 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.235049 | orchestrator | 2025-06-01 23:52:25.235058 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-06-01 23:52:25.235068 | orchestrator | Sunday 01 June 2025 23:47:57 +0000 (0:00:00.495) 0:02:04.819 *********** 2025-06-01 23:52:25.235078 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.235087 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.235097 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.235106 | orchestrator | 2025-06-01 23:52:25.235116 | orchestrator | TASK [include_role : designate] ************************************************ 2025-06-01 23:52:25.235125 | orchestrator | Sunday 01 June 2025 23:47:57 +0000 (0:00:00.306) 0:02:05.126 *********** 2025-06-01 23:52:25.235135 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:52:25.235145 | orchestrator | 2025-06-01 23:52:25.235154 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-06-01 23:52:25.235167 | orchestrator | Sunday 01 June 2025 23:47:58 +0000 (0:00:00.751) 0:02:05.878 *********** 2025-06-01 23:52:25.235185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-01 23:52:25.235211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-01 23:52:25.235227 | orchestrator | changed: [testbed-node-0] => (item={'key': 
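The changed/skipping split in the cinder loop above follows directly from the item shape: only cinder-api defines a 'haproxy' sub-dict, so it is the only cinder service for which load-balancer configuration is written; cinder-scheduler, cinder-volume, and cinder-backup carry no 'haproxy' key and are reported as skipping on every node. A sketch of that predicate, assuming the role's loop condition reduces to an enabled-and-has-haproxy test:

    # Assumption: services without a "haproxy" mapping are skipped by the role.
    services = {
        "cinder-api":       {"enabled": True, "haproxy": {"cinder_api": {"port": "8776"}}},
        "cinder-scheduler": {"enabled": True},
        "cinder-volume":    {"enabled": True},
        "cinder-backup":    {"enabled": True},
    }

    for name, svc in services.items():
        status = "changed" if svc.get("enabled") and "haproxy" in svc else "skipping"
        print(f"{status}: {name}")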
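The per-task header lines ('Sunday 01 June 2025 23:47:52 +0000 (0:00:01.287) 0:02:00.311') appear to follow the profile_tasks-style callback format: wall-clock time, the duration of the task that just finished, then the cumulative play time. The figures above are self-consistent, which a quick check confirms, taking the values from the consecutive cinder firewall and ProxySQL headers:

    from datetime import timedelta

    def td(s):
        """Parse an Ansible 'H:MM:SS.ffffff' duration string."""
        h, m, sec = s.split(":")
        return timedelta(hours=int(h), minutes=int(m), seconds=float(sec))

    previous_total = td("0:01:59.024")  # cumulative shown on the firewall task header
    task_duration  = td("0:00:01.287")  # duration shown on the next header
    print(previous_total + task_duration)  # 0:02:00.311000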
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-01 23:52:25.235246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.235256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-01 23:52:25.235267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.235278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.235288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.235304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.235324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.235335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.235345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.235356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}})  2025-06-01 23:52:25.235366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.235381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-01 23:52:25.235402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-01 23:52:25.235412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.235423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.235436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.235454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.235471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.235482 | orchestrator | 2025-06-01 23:52:25.235498 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-06-01 23:52:25.235507 | orchestrator | Sunday 01 June 2025 23:48:01 +0000 (0:00:03.731) 0:02:09.610 *********** 2025-06-01 23:52:25.235529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-01 23:52:25.235541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-01 23:52:25.235551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.235561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.235571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.235581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.235602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.235612 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.235626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-01 23:52:25.235636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-01 23:52:25.235646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.235656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.235666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.235687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-01 23:52:25.235702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.235720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-01 23:52:25.235737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.235752 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.235763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.235773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.235789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.235823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.235854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.235871 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.235886 | orchestrator | 2025-06-01 23:52:25.235902 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-06-01 23:52:25.235917 | orchestrator | Sunday 01 June 2025 23:48:02 +0000 (0:00:00.801) 0:02:10.411 *********** 2025-06-01 23:52:25.235936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-01 23:52:25.235953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-01 23:52:25.235994 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.236006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-01 23:52:25.236017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9001', 'listen_port': '9001'}})  2025-06-01 23:52:25.236027 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.236036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-01 23:52:25.236046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-01 23:52:25.236055 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.236065 | orchestrator | 2025-06-01 23:52:25.236075 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-06-01 23:52:25.236094 | orchestrator | Sunday 01 June 2025 23:48:03 +0000 (0:00:00.962) 0:02:11.373 *********** 2025-06-01 23:52:25.236103 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:52:25.236113 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:52:25.236123 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:52:25.236132 | orchestrator | 2025-06-01 23:52:25.236142 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-06-01 23:52:25.236151 | orchestrator | Sunday 01 June 2025 23:48:05 +0000 (0:00:01.768) 0:02:13.141 *********** 2025-06-01 23:52:25.236161 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:52:25.236170 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:52:25.236180 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:52:25.236189 | orchestrator | 2025-06-01 23:52:25.236199 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-06-01 23:52:25.236208 | orchestrator | Sunday 01 June 2025 23:48:07 +0000 (0:00:01.975) 0:02:15.117 *********** 2025-06-01 23:52:25.236218 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.236228 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.236237 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.236247 | orchestrator | 2025-06-01 23:52:25.236257 | orchestrator | TASK [include_role : glance] *************************************************** 2025-06-01 23:52:25.236266 | orchestrator | Sunday 01 June 2025 23:48:07 +0000 (0:00:00.319) 0:02:15.437 *********** 2025-06-01 23:52:25.236276 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:52:25.236285 | orchestrator | 2025-06-01 23:52:25.236295 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-06-01 23:52:25.236304 | orchestrator | Sunday 01 June 2025 23:48:08 +0000 (0:00:00.795) 0:02:16.233 *********** 2025-06-01 23:52:25.236332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
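As with cinder, each designate API item above defines a pair of frontends: designate_api (internal, bound to the internal VIP) and designate_api_external (carrying external_fqdn api.testbed.osism.xyz), both on listen_port 9001. A small sketch of deriving the two endpoints from the logged structure; the internal VIP placeholder is an assumption:

    # Each API service declares an internal and an external frontend entry.
    haproxy = {
        "designate_api": {"enabled": "yes", "mode": "http", "external": False,
                          "port": "9001", "listen_port": "9001"},
        "designate_api_external": {"enabled": "yes", "mode": "http", "external": True,
                                   "external_fqdn": "api.testbed.osism.xyz",
                                   "port": "9001", "listen_port": "9001"},
    }

    for name, fe in haproxy.items():
        host = fe.get("external_fqdn", "<internal-vip>")  # placeholder, assumed
        print(f"{name}: {host}:{fe['listen_port']} (external={fe['external']})")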
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-01 23:52:25.236345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-01 23:52:25.236373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 
'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-01 23:52:25.236386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-01 23:52:25.236422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-01 23:52:25.236442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-01 23:52:25.236460 | orchestrator | 2025-06-01 23:52:25.236469 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-06-01 23:52:25.236479 | orchestrator | Sunday 01 June 2025 23:48:12 +0000 (0:00:04.110) 0:02:20.343 *********** 2025-06-01 23:52:25.236521 | orchestrator | skipping: 
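
The glance loop items above already carry everything the haproxy-config role needs to emit a load-balancer stanza: the service port, per-side timeout overrides, and a prebuilt custom_member_list. As a minimal sketch of that mapping (the real stanza layout comes from kolla-ansible's Jinja2 templates, and 192.168.16.9 is only inferred to be the internal VIP from the no_proxy values in the items), in Python:

    # Values copied from the glance_api item logged above; layout is illustrative.
    service = {
        "port": "9292",
        "frontend_http_extra": ["timeout client 6h"],
        "backend_http_extra": ["timeout server 6h"],
        "custom_member_list": [
            "server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5",
            "server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5",
            "server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5",
            "",  # kolla leaves a trailing empty entry; filtered out below
        ],
    }

    def render(name, vip, svc):
        # Frontend: bind the VIP on the service port, then frontend extras.
        lines = [f"frontend {name}_front", "    mode http"]
        lines += [f"    {opt}" for opt in svc["frontend_http_extra"]]
        lines += [f"    bind {vip}:{svc['port']}", f"    default_backend {name}_back", ""]
        # Backend: backend extras, then the prebuilt member list verbatim.
        lines += [f"backend {name}_back", "    mode http"]
        lines += [f"    {opt}" for opt in svc["backend_http_extra"]]
        lines += [f"    {m}" for m in svc["custom_member_list"] if m]
        return "\n".join(lines)

    print(render("glance_api", "192.168.16.9", service))

The 6h client/server timeouts appear only on the glance entries, presumably because image uploads and downloads can hold a single HTTP connection far longer than the defaults tolerate.
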
[testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-01 23:52:25.236549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-01 23:52:25.236577 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.236592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-01 23:52:25.236615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-01 23:52:25.236627 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.236644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-01 23:52:25.236666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-01 23:52:25.236677 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.236687 | orchestrator | 2025-06-01 23:52:25.236697 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-06-01 23:52:25.236714 | orchestrator | Sunday 01 June 2025 23:48:15 +0000 (0:00:02.755) 0:02:23.099 *********** 2025-06-01 23:52:25.236731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-01 23:52:25.236755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-01 23:52:25.236766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-01 23:52:25.236775 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.236786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-01 23:52:25.236796 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.236806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 
check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-01 23:52:25.236823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-01 23:52:25.236833 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.236843 | orchestrator | 2025-06-01 23:52:25.236853 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-06-01 23:52:25.236862 | orchestrator | Sunday 01 June 2025 23:48:18 +0000 (0:00:03.243) 0:02:26.342 *********** 2025-06-01 23:52:25.236876 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:52:25.236886 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:52:25.236896 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:52:25.236905 | orchestrator | 2025-06-01 23:52:25.236915 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-06-01 23:52:25.236924 | orchestrator | Sunday 01 June 2025 23:48:20 +0000 (0:00:01.549) 0:02:27.891 *********** 2025-06-01 23:52:25.236943 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:52:25.236953 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:52:25.236962 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:52:25.236994 | orchestrator | 2025-06-01 23:52:25.237004 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-06-01 23:52:25.237014 | orchestrator | Sunday 01 June 2025 23:48:22 +0000 (0:00:02.049) 0:02:29.940 *********** 2025-06-01 23:52:25.237023 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.237033 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.237047 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.237063 | orchestrator | 2025-06-01 23:52:25.237080 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-06-01 23:52:25.237094 | orchestrator | Sunday 01 June 2025 23:48:22 +0000 (0:00:00.311) 0:02:30.252 *********** 2025-06-01 23:52:25.237103 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:52:25.237113 | orchestrator | 2025-06-01 23:52:25.237122 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-06-01 23:52:25.237132 | orchestrator | Sunday 01 June 2025 23:48:23 +0000 (0:00:00.860) 0:02:31.112 *********** 2025-06-01 23:52:25.237142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-01 23:52:25.237153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-01 23:52:25.237164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-01 23:52:25.237174 | orchestrator | 2025-06-01 23:52:25.237184 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-06-01 23:52:25.237193 | orchestrator | Sunday 01 June 2025 23:48:26 +0000 (0:00:03.209) 0:02:34.321 *********** 2025-06-01 23:52:25.237210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-01 23:52:25.237232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-01 23:52:25.237243 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.237252 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.237262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-01 23:52:25.237272 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.237281 | orchestrator | 2025-06-01 23:52:25.237291 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-06-01 23:52:25.237300 | orchestrator | Sunday 01 June 2025 23:48:26 +0000 (0:00:00.399) 0:02:34.721 *********** 2025-06-01 23:52:25.237310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-01 23:52:25.237320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-06-01 23:52:25.237329 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.237346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-01 23:52:25.237363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-06-01 23:52:25.237378 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.237389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-01 23:52:25.237399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-06-01 23:52:25.237409 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.237418 | orchestrator | 2025-06-01 23:52:25.237428 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-06-01 23:52:25.237438 | orchestrator | Sunday 01 June 2025 23:48:27 +0000 (0:00:00.657) 0:02:35.378 *********** 2025-06-01 23:52:25.237447 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:52:25.237458 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:52:25.237475 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:52:25.237501 | orchestrator | 2025-06-01 23:52:25.237517 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-06-01 23:52:25.237532 | orchestrator | Sunday 01 June 2025 23:48:29 +0000 (0:00:01.720) 0:02:37.098 *********** 2025-06-01 23:52:25.237548 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:52:25.237565 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:52:25.237582 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:52:25.237597 | orchestrator | 2025-06-01 
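
The proxysql-config tasks reporting changed throughout this run (designate and glance above, grafana here, horizon below) stage two small files per service: a users file and a rules file. These ultimately feed ProxySQL's mysql_users and mysql_query_rules tables so each service's database account is routed to the right Galera hostgroup. The exact file schema is owned by kolla-ansible's proxysql-config templates, so the entries below are only a hypothetical sketch of the shape, with made-up field values:

    # Hypothetical shape; field names mirror ProxySQL's mysql_users and
    # mysql_query_rules tables, values are illustrative only.
    users_entry = {
        "username": "grafana",        # assumed service DB account name
        "password": "***",            # sourced from kolla's passwords in reality
        "default_hostgroup": 0,       # assumed writer hostgroup for Galera
    }
    rules_entry = {
        "schemaname": "grafana",      # route statements for this schema ...
        "destination_hostgroup": 0,   # ... to the writer hostgroup
        "apply": 1,                   # stop evaluating further rules on match
    }
    print(users_entry)
    print(rules_entry)
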
23:52:25.237610 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-06-01 23:52:25.237626 | orchestrator | Sunday 01 June 2025 23:48:31 +0000 (0:00:02.021) 0:02:39.119 *********** 2025-06-01 23:52:25.237642 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.237658 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.237683 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.237699 | orchestrator | 2025-06-01 23:52:25.237716 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-06-01 23:52:25.237732 | orchestrator | Sunday 01 June 2025 23:48:31 +0000 (0:00:00.332) 0:02:39.452 *********** 2025-06-01 23:52:25.237748 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:52:25.237766 | orchestrator | 2025-06-01 23:52:25.237782 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-06-01 23:52:25.237792 | orchestrator | Sunday 01 June 2025 23:48:32 +0000 (0:00:01.033) 0:02:40.485 *********** 2025-06-01 23:52:25.237810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-01 23:52:25.237836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 
'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-01 23:52:25.237862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-01 23:52:25.237879 | orchestrator | 2025-06-01 23:52:25.237895 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-06-01 23:52:25.237918 | orchestrator | Sunday 01 June 2025 23:48:37 +0000 (0:00:04.461) 0:02:44.947 *********** 2025-06-01 23:52:25.237952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-01 23:52:25.238064 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.238083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 
'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-01 23:52:25.238104 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.238130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-01 23:52:25.238147 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.238164 | orchestrator | 2025-06-01 23:52:25.238181 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-06-01 23:52:25.238194 | orchestrator | Sunday 01 June 2025 23:48:38 +0000 (0:00:00.850) 0:02:45.798 *********** 2025-06-01 23:52:25.238205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-01 23:52:25.238215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-01 23:52:25.238226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-01 23:52:25.238244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-01 23:52:25.238255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-01 23:52:25.238265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-01 23:52:25.238275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-01 23:52:25.238285 | orchestrator | skipping: 
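
Worth pausing on the horizon items scrolling past here: every HTTP frontend (internal, external, and both redirect variants) injects the same ACME rule via frontend_http_extra or frontend_redirect_extra, so Let's Encrypt HTTP-01 challenges are handed to the acme_client backend before the port-80-to-443 redirect can swallow them. The rule text below is verbatim from the loop items; the stanza framing around it is only a sketch:

    # The one rule shared by all horizon frontends (copied from the log):
    acme_rule = ("use_backend acme_client_back "
                 "if { path_reg ^/.well-known/acme-challenge/.+ }")

    # Sketch of where it lands; the real layout comes from kolla's templates.
    for name, mode, port in [
        ("horizon_external", "http", "443"),
        ("horizon_external_redirect", "redirect", "80"),
    ]:
        print(f"frontend {name}_front   # mode={mode}, public port {port}")
        print(f"    {acme_rule}")
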
[testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-06-01 23:52:25.238301 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.238311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-01 23:52:25.238321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-06-01 23:52:25.238331 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.238346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-01 23:52:25.238356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-01 23:52:25.238366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-01 23:52:25.238376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-01 23:52:25.238386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-06-01 23:52:25.238395 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.238412 | orchestrator | 2025-06-01 23:52:25.238422 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-06-01 23:52:25.238431 | orchestrator | Sunday 01 June 2025 23:48:39 +0000 (0:00:01.242) 0:02:47.040 *********** 2025-06-01 23:52:25.238441 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:52:25.238451 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:52:25.238460 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:52:25.238468 | orchestrator | 2025-06-01 23:52:25.238476 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-06-01 23:52:25.238483 | orchestrator | Sunday 01 June 2025 23:48:41 +0000 (0:00:01.814) 0:02:48.854 *********** 2025-06-01 23:52:25.238491 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:52:25.238499 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:52:25.238507 | orchestrator | changed: 
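
The long ENABLE_* environment block repeated in each horizon item is the dashboard image's entire plugin switchboard, read by the container's startup scripts. Filtering the dict exactly as logged shows which dashboards this testbed actually turns on:

    # Environment dict copied verbatim from the horizon loop items above.
    environment = {
        'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes',
        'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no',
        'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no',
        'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes',
        'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no',
        'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no',
    }
    enabled = sorted(k.removeprefix('ENABLE_').lower()
                     for k, v in environment.items()
                     if k.startswith('ENABLE_') and v == 'yes')
    print(enabled)   # -> ['designate', 'magnum', 'manila', 'octavia']
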
[testbed-node-2] 2025-06-01 23:52:25.238515 | orchestrator | 2025-06-01 23:52:25.238523 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-06-01 23:52:25.238530 | orchestrator | Sunday 01 June 2025 23:48:43 +0000 (0:00:02.205) 0:02:51.060 *********** 2025-06-01 23:52:25.238538 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.238546 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.238554 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.238562 | orchestrator | 2025-06-01 23:52:25.238570 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-06-01 23:52:25.238578 | orchestrator | Sunday 01 June 2025 23:48:43 +0000 (0:00:00.315) 0:02:51.375 *********** 2025-06-01 23:52:25.238586 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.238594 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.238601 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.238609 | orchestrator | 2025-06-01 23:52:25.238617 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-06-01 23:52:25.238625 | orchestrator | Sunday 01 June 2025 23:48:43 +0000 (0:00:00.346) 0:02:51.722 *********** 2025-06-01 23:52:25.238633 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:52:25.238646 | orchestrator | 2025-06-01 23:52:25.238660 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-06-01 23:52:25.238673 | orchestrator | Sunday 01 June 2025 23:48:45 +0000 (0:00:01.244) 0:02:52.967 *********** 2025-06-01 23:52:25.238704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 23:52:25.238726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-01 23:52:25.238754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 23:52:25.238765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-01 23:52:25.238773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-01 23:52:25.238782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-01 23:52:25.238801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 23:52:25.238811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-01 23:52:25.238825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-01 23:52:25.238833 | orchestrator | 2025-06-01 23:52:25.238841 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-06-01 23:52:25.238849 | orchestrator | Sunday 01 June 2025 23:48:48 +0000 (0:00:03.773) 0:02:56.740 *********** 2025-06-01 23:52:25.238858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-01 23:52:25.238867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-01 23:52:25.238880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-01 23:52:25.238888 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.238901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-01 23:52:25.238915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-01 23:52:25.238923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-01 23:52:25.238931 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.238940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-01 23:52:25.238953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-01 23:52:25.238965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-01 23:52:25.239000 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.239010 | orchestrator | 2025-06-01 23:52:25.239018 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-06-01 23:52:25.239026 | orchestrator | Sunday 01 June 2025 23:48:49 +0000 (0:00:00.578) 0:02:57.318 *********** 2025-06-01 23:52:25.239034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-01 23:52:25.239043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-01 23:52:25.239051 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.239059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-01 23:52:25.239068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}})
2025-06-01 23:52:25.239076 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:52:25.239084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-06-01 23:52:25.239092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-06-01 23:52:25.239100 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:52:25.239108 | orchestrator |
2025-06-01 23:52:25.239116 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2025-06-01 23:52:25.239123 | orchestrator | Sunday 01 June 2025 23:48:50 +0000 (0:00:01.165) 0:02:58.483 ***********
2025-06-01 23:52:25.239131 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:52:25.239139 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:52:25.239147 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:52:25.239155 | orchestrator |
2025-06-01 23:52:25.239163 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2025-06-01 23:52:25.239171 | orchestrator | Sunday 01 June 2025 23:48:52 +0000 (0:00:01.310) 0:02:59.794 ***********
2025-06-01 23:52:25.239179 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:52:25.239187 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:52:25.239195 | orchestrator | changed: [testbed-node-2]
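Note: the keystone items above define a matched pair of frontends, keystone_internal and keystone_external, both on port 5000, with the external one tied to api.testbed.osism.xyz. A dict like that maps roughly onto an HAProxy stanza as sketched below; this is an illustrative reconstruction, not the role's actual Jinja template, and the VIP bind address is a placeholder.

    # Illustrative mapping from a logged frontend dict to an HAProxy stanza.
    frontend = {
        "mode": "http", "port": "5000", "listen_port": "5000",
        "backend_http_extra": ["balance roundrobin"],
    }
    members = [("testbed-node-0", "192.168.16.10"),
               ("testbed-node-1", "192.168.16.11"),
               ("testbed-node-2", "192.168.16.12")]

    lines = ["listen keystone_external",
             f"    mode {frontend['mode']}",
             f"    bind <vip>:{frontend['listen_port']}"]  # <vip> is a placeholder
    lines += [f"    {opt}" for opt in frontend["backend_http_extra"]]
    lines += [f"    server {name} {ip}:{frontend['port']} check" for name, ip in members]
    print("\n".join(lines))

The 'balance roundrobin' entry carried in backend_http_extra is what spreads keystone requests across the three nodes.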
2025-06-01 23:52:25.239202 | orchestrator |
2025-06-01 23:52:25.239210 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2025-06-01 23:52:25.239218 | orchestrator | Sunday 01 June 2025 23:48:54 +0000 (0:00:01.973) 0:03:01.767 ***********
2025-06-01 23:52:25.239226 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:52:25.239234 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:52:25.239243 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:52:25.239257 | orchestrator |
2025-06-01 23:52:25.239270 | orchestrator | TASK [include_role : magnum] ***************************************************
2025-06-01 23:52:25.239289 | orchestrator | Sunday 01 June 2025 23:48:54 +0000 (0:00:00.292) 0:03:02.060 ***********
2025-06-01 23:52:25.239298 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 23:52:25.239306 | orchestrator |
2025-06-01 23:52:25.239314 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2025-06-01 23:52:25.239322 | orchestrator | Sunday 01 June 2025 23:48:55 +0000 (0:00:01.218) 0:03:03.278 ***********
2025-06-01 23:52:25.239341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-01 23:52:25.239354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-01 23:52:25.239369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-01 23:52:25.239384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-01 23:52:25.239405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port':
'9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-01 23:52:25.239436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.239447 | orchestrator | 2025-06-01 23:52:25.239455 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-06-01 23:52:25.239465 | orchestrator | Sunday 01 June 2025 23:48:59 +0000 (0:00:03.532) 0:03:06.811 *********** 2025-06-01 23:52:25.239479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-01 23:52:25.239493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.239507 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.239521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-01 23:52:25.239551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-01 23:52:25.239570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.239586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.239598 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.239606 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.239614 | orchestrator | 2025-06-01 23:52:25.239622 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-06-01 23:52:25.239630 | orchestrator | Sunday 01 June 2025 23:48:59 +0000 (0:00:00.651) 0:03:07.463 *********** 2025-06-01 23:52:25.239639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-01 
23:52:25.239647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-06-01 23:52:25.239656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-06-01 23:52:25.239664 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:52:25.239672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-06-01 23:52:25.239685 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:52:25.239693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-06-01 23:52:25.239701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-06-01 23:52:25.239709 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:52:25.239717 | orchestrator |
2025-06-01 23:52:25.239725 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2025-06-01 23:52:25.239733 | orchestrator | Sunday 01 June 2025 23:49:01 +0000 (0:00:01.549) 0:03:09.013 ***********
2025-06-01 23:52:25.239741 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:52:25.239748 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:52:25.239756 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:52:25.239764 | orchestrator |
2025-06-01 23:52:25.239772 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2025-06-01 23:52:25.239780 | orchestrator | Sunday 01 June 2025 23:49:02 +0000 (0:00:01.285) 0:03:10.298 ***********
2025-06-01 23:52:25.239788 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:52:25.239795 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:52:25.239808 | orchestrator | changed: [testbed-node-2]
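Note: the "Copying over <service> ProxySQL users config" and "rules config" pair repeats for every enabled service (horizon and keystone above, manila below): each service contributes one fragment registering its database user with ProxySQL and one fragment with the matching query-routing rules. The sketch below only illustrates that per-service fan-out; the paths and file layout are assumptions for illustration, not the role's actual output.

    # Sketch of the per-service fan-out seen in the proxysql-config tasks.
    # Paths are hypothetical placeholders, not the real ProxySQL config layout.
    from pathlib import Path

    def proxysql_fragments(service: str, base: Path = Path("/etc/proxysql")) -> list[Path]:
        return [base / "users" / f"{service}.yaml", base / "rules" / f"{service}.yaml"]

    for service in ("horizon", "keystone", "magnum", "manila"):
        for path in proxysql_fragments(service):
            print(f"would copy {path}")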
2025-06-01 23:52:25.239820 | orchestrator |
2025-06-01 23:52:25.239833 | orchestrator | TASK [include_role : manila] ***************************************************
2025-06-01 23:52:25.239848 | orchestrator | Sunday 01 June 2025 23:49:04 +0000 (0:00:02.114) 0:03:12.413 ***********
2025-06-01 23:52:25.239867 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 23:52:25.239881 | orchestrator |
2025-06-01 23:52:25.239893 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2025-06-01 23:52:25.239905 | orchestrator | Sunday 01 June 2025 23:49:05 +0000 (0:00:01.074) 0:03:13.487 ***********
2025-06-01 23:52:25.239924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-06-01 23:52:25.239940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-06-01 23:52:25.239954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-06-01 23:52:25.239998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 23:52:25.240008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 23:52:25.240023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group':
'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.240036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.240045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.240053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.240066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.240075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 
'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.240088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.240096 | orchestrator | 2025-06-01 23:52:25.240105 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-06-01 23:52:25.240113 | orchestrator | Sunday 01 June 2025 23:49:09 +0000 (0:00:04.084) 0:03:17.572 *********** 2025-06-01 23:52:25.240125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-01 23:52:25.240133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.240147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.240155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.240163 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.240172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-01 23:52:25.240185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.240197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.240205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.240218 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.240227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': 
{'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-01 23:52:25.240235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.240244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.240257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.240266 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.240274 | orchestrator | 2025-06-01 23:52:25.240282 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-06-01 23:52:25.240290 | orchestrator | Sunday 01 June 2025 23:49:10 +0000 (0:00:00.764) 0:03:18.336 *********** 2025-06-01 23:52:25.240298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-01 23:52:25.240310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-01 23:52:25.240318 
| orchestrator | skipping: [testbed-node-0]
2025-06-01 23:52:25.240326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-06-01 23:52:25.240334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-06-01 23:52:25.240346 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:52:25.240354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-06-01 23:52:25.240362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-06-01 23:52:25.240370 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:52:25.240378 | orchestrator |
2025-06-01 23:52:25.240386 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2025-06-01 23:52:25.240394 | orchestrator | Sunday 01 June 2025 23:49:11 +0000 (0:00:00.872) 0:03:19.209 ***********
2025-06-01 23:52:25.240401 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:52:25.240409 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:52:25.240417 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:52:25.240425 | orchestrator |
2025-06-01 23:52:25.240433 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2025-06-01 23:52:25.240441 | orchestrator | Sunday 01 June 2025 23:49:13 +0000 (0:00:01.720) 0:03:20.930 ***********
2025-06-01 23:52:25.240448 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:52:25.240456 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:52:25.240464 | orchestrator | changed: [testbed-node-2]
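Note: the mariadb tasks that follow put the Galera cluster behind HAProxy in active/backup mode: the custom_member_list entries in the items below direct all traffic to one writer (testbed-node-0) and mark the other two members as "backup", which avoids multi-writer conflicts. A short sketch that rebuilds those member lines from a node list:

    # Rebuilds the custom_member_list strings visible in the mariadb items below:
    # the first node is the active writer, the rest are HAProxy "backup" servers.
    nodes = [("testbed-node-0", "192.168.16.10"),
             ("testbed-node-1", "192.168.16.11"),
             ("testbed-node-2", "192.168.16.12")]

    members = [
        f" server {name} {ip}:3306 check port 3306 inter 2000 rise 2 fall 5"
        + ("" if index == 0 else " backup")
        for index, (name, ip) in enumerate(nodes)
    ]
    print("\n".join(members))

Running it prints exactly the three " server ..." lines embedded in the items below.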
2025-06-01 23:52:25.240472 | orchestrator |
2025-06-01 23:52:25.240480 | orchestrator | TASK [include_role : mariadb] **************************************************
2025-06-01 23:52:25.240487 | orchestrator | Sunday 01 June 2025 23:49:15 +0000 (0:00:02.227) 0:03:23.157 ***********
2025-06-01 23:52:25.240495 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 23:52:25.240503 | orchestrator |
2025-06-01 23:52:25.240511 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2025-06-01 23:52:25.240518 | orchestrator | Sunday 01 June 2025 23:49:16 +0000 (0:00:01.105) 0:03:24.263 ***********
2025-06-01 23:52:25.240526 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-01 23:52:25.240534 | orchestrator |
2025-06-01 23:52:25.240542 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2025-06-01 23:52:25.240550 | orchestrator | Sunday 01 June 2025 23:49:19 +0000 (0:00:03.094) 0:03:27.358 ***********
2025-06-01 23:52:25.240567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-01 23:52:25.240582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-06-01 23:52:25.240591 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:52:25.240600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp',
'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-01 23:52:25.240609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-01 23:52:25.240617 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.240632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-01 23:52:25.240648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 
'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-01 23:52:25.240661 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.240674 | orchestrator | 2025-06-01 23:52:25.240688 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-06-01 23:52:25.240701 | orchestrator | Sunday 01 June 2025 23:49:22 +0000 (0:00:02.785) 0:03:30.144 *********** 2025-06-01 23:52:25.240735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-01 23:52:25.240766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-01 23:52:25.240779 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.240788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-01 23:52:25.240797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-01 23:52:25.240806 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.240826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-01 23:52:25.240840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-01 23:52:25.240848 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.240856 | orchestrator |
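The mariadb-clustercheck entries above feed MYSQL_USERNAME, MYSQL_PASSWORD, MYSQL_HOST and AVAILABLE_WHEN_DONOR=1 into the container, and the mariadb container's own healthcheck calls /usr/bin/clustercheck. A minimal Python sketch of the usual Galera clustercheck semantics follows; the real check in the kolla image is a shell script, so the client library and details here are illustrative:

```python
# Sketch of Galera clustercheck semantics, not the actual /usr/bin/clustercheck.
import os
import pymysql  # assumption: any MySQL client exposes the same status query

AVAILABLE_WHEN_DONOR = os.environ.get("AVAILABLE_WHEN_DONOR", "0") == "1"

def cluster_available() -> bool:
    conn = pymysql.connect(
        host=os.environ["MYSQL_HOST"],
        user=os.environ["MYSQL_USERNAME"],
        password=os.environ["MYSQL_PASSWORD"],
    )
    try:
        with conn.cursor() as cur:
            cur.execute("SHOW STATUS LIKE 'wsrep_local_state'")
            _, state = cur.fetchone()
    finally:
        conn.close()
    # wsrep_local_state 4 = Synced; 2 = Donor/Desynced, which still counts
    # as available when AVAILABLE_WHEN_DONOR=1 (as configured above).
    return state == "4" or (state == "2" and AVAILABLE_WHEN_DONOR)

# clustercheck answers over HTTP so HAProxy's httpchk (or a container
# healthcheck) can consume it: 200 when available, 503 otherwise.
print("HTTP/1.1 200 OK" if cluster_available() else "HTTP/1.1 503 Service Unavailable")
```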
2025-06-01 23:52:25.240864 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-06-01 23:52:25.240872 | orchestrator | Sunday 01 June 2025 23:49:24 +0000 (0:00:02.186) 0:03:32.330 *********** 2025-06-01 23:52:25.240880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-01 23:52:25.240889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-01 23:52:25.240897 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.240905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-01 23:52:25.240914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-01 23:52:25.240927 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.241149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-01 23:52:25.241175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-01 23:52:25.241184 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.241192 | orchestrator |
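The custom_member_list entries in the items above are the raw server lines HAProxy ends up with: one active Galera writer (testbed-node-0) and two backup members, so failover never splits writes across nodes. A sketch of the listen section they would render to, built in Python for illustration; the surrounding template layout and the VIP address are assumptions:

```python
# Assemble an HAProxy "listen mariadb" section from the variables in the log.
# The bind address below is a hypothetical internal VIP, for illustration only.
service = {
    "mode": "tcp",
    "port": "3306",
    "frontend_tcp_extra": ["option clitcpka", "timeout client 3600s"],
    "backend_tcp_extra": ["option srvtcpka", "timeout server 3600s"],
    "custom_member_list": [
        "server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5",
        "server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup",
        "server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup",
    ],
}

internal_vip = "192.168.16.254"  # hypothetical VIP
stanza = ["listen mariadb", f"  mode {service['mode']}", f"  bind {internal_vip}:{service['port']}"]
stanza += [f"  {opt}" for opt in service["frontend_tcp_extra"] + service["backend_tcp_extra"]]
stanza += [f"  {member}" for member in service["custom_member_list"]]
print("\n".join(stanza))
```

With only the first member active, HAProxy sends all 3306 traffic to testbed-node-0 and promotes a backup only when its checks fail, which is the usual way to avoid multi-writer conflicts on a Galera cluster.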
2025-06-01 23:52:25.241200 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-06-01 23:52:25.241209 | orchestrator | Sunday 01 June 2025 23:49:27 +0000 (0:00:02.494) 0:03:34.825 *********** 2025-06-01 23:52:25.241217 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:52:25.241225 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:52:25.241233 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:52:25.241240 | orchestrator | 2025-06-01 23:52:25.241248 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-06-01 23:52:25.241256 | orchestrator | Sunday 01 June 2025 23:49:28 +0000 (0:00:01.918) 0:03:36.743 *********** 2025-06-01 23:52:25.241264 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.241271 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.241278 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.241285 | orchestrator | 2025-06-01 23:52:25.241292 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-06-01 23:52:25.241298 | orchestrator | Sunday 01 June 2025 23:49:30 +0000 (0:00:01.447) 0:03:38.190 *********** 2025-06-01 23:52:25.241305 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.241312 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.241318 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.241325 | orchestrator | 2025-06-01 23:52:25.241332 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-06-01 23:52:25.241338 | orchestrator | Sunday 01 June 2025 23:49:30 +0000 (0:00:00.337) 0:03:38.528 *********** 2025-06-01 23:52:25.241345 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:52:25.241352 | orchestrator | 2025-06-01 23:52:25.241358 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-06-01 23:52:25.241365 | orchestrator | Sunday 01 June 2025 23:49:31 +0000 (0:00:01.117) 0:03:39.646 *********** 2025-06-01 23:52:25.241372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-06-01 23:52:25.241387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-06-01 23:52:25.241401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-06-01 23:52:25.241408 | orchestrator |
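memcached's own haproxy entry stays disabled here ('enabled': False), but it is declared with 'active_passive': True. For services balanced that way, kolla-ansible keeps a single backend active and marks the others as backup, so every client converges on one memcached instance; the exact template logic is an assumption in this sketch:

```python
# Assumed active/passive rule: first member stays active, the rest are
# demoted to "backup" and only serve after the active member fails checks.
def render_members(hosts, port, active_passive=True):
    members = []
    for index, host in enumerate(hosts):
        line = f"server {host} {host}:{port} check inter 2000 rise 2 fall 5"
        if active_passive and index > 0:
            line += " backup"  # passive standby
        members.append(line)
    return members

for member in render_members(["testbed-node-0", "testbed-node-1", "testbed-node-2"], 11211):
    print(member)
```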
2025-06-01 23:52:25.241419 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-06-01 23:52:25.241426 | orchestrator | Sunday 01 June 2025 23:49:33 +0000 (0:00:01.674) 0:03:41.320 *********** 2025-06-01 23:52:25.241433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-06-01 23:52:25.241443 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.241456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-06-01 23:52:25.241468 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.241480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-06-01 23:52:25.241498 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.241509 | orchestrator | 2025-06-01 23:52:25.241521 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-06-01 23:52:25.241532 | orchestrator | Sunday 01 June 2025 23:49:33 +0000 (0:00:00.391) 0:03:41.711 *********** 2025-06-01 23:52:25.241545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-06-01 23:52:25.241558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-06-01 23:52:25.241570 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.241581 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.241600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-06-01 23:52:25.241608 | orchestrator | skipping: [testbed-node-2]
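The healthcheck dicts repeated throughout these items ('interval': '30', 'retries': '3', 'start_period': '5', 'timeout': '30', plus a CMD-SHELL test) map onto Docker's native container healthcheck. Expressed as the equivalent docker run flags, assuming the bare numbers are seconds:

```python
# Translate a kolla healthcheck dict into equivalent "docker run" flags.
# kolla drives Docker through its own tooling; this only shows the mapping.
import shlex

healthcheck = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_listen memcached 11211"],
    "timeout": "30",
}

flags = [
    f"--health-interval={healthcheck['interval']}s",
    f"--health-retries={healthcheck['retries']}",
    f"--health-start-period={healthcheck['start_period']}s",
    f"--health-timeout={healthcheck['timeout']}s",
    "--health-cmd=" + shlex.quote(healthcheck["test"][1]),  # CMD-SHELL -> shell form
]
print("docker run " + " ".join(flags) + " ...")
```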
2025-06-01 23:52:25.241619 | orchestrator | 2025-06-01 23:52:25.241629 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-06-01 23:52:25.241639 | orchestrator | Sunday 01 June 2025 23:49:34 +0000 (0:00:00.584) 0:03:42.296 *********** 2025-06-01 23:52:25.241650 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.241661 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.241672 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.241684 | orchestrator | 2025-06-01 23:52:25.241695 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-06-01 23:52:25.241706 | orchestrator | Sunday 01 June 2025 23:49:35 +0000 (0:00:00.713) 0:03:43.010 *********** 2025-06-01 23:52:25.241718 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.241725 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.241731 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.241738 | orchestrator | 2025-06-01 23:52:25.241744 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-06-01 23:52:25.241751 | orchestrator | Sunday 01 June 2025 23:49:36 +0000 (0:00:01.262) 0:03:44.273 *********** 2025-06-01 23:52:25.241757 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.241764 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.241770 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.241780 | orchestrator | 2025-06-01 23:52:25.241790 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-06-01 23:52:25.241800 | orchestrator | Sunday 01 June 2025 23:49:36 +0000 (0:00:00.316) 0:03:44.589 *********** 2025-06-01 23:52:25.241811 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:52:25.241822 | orchestrator | 2025-06-01 23:52:25.241833 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-06-01 23:52:25.241845 | orchestrator | Sunday 01 June 2025 23:49:38 +0000 (0:00:01.462) 0:03:46.051 *********** 2025-06-01 23:52:25.241864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 23:52:25.241876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.241883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.241896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.241909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-01 23:52:25.241927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.241940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 23:52:25.241952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 23:52:25.241965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.242002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 23:52:25.242054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.242067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}}})  2025-06-01 23:52:25.242091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 23:52:25.242104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.242118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-01 23:52:25.242131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-01 23:52:25.242156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.242170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-01 23:52:25.242191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.242204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.242217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}}) 2025-06-01 23:52:25.242236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.242257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.242270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.242277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-01 23:52:25.242284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.242295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.242305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-01 23:52:25.242317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 23:52:25.242324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.242331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 23:52:25.242338 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 23:52:25.242346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 23:52:25.242354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.242377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.242397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 23:52:25.242409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 23:52:25.242421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.242433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.242444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-01 23:52:25.242462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-01 23:52:25.242486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 23:52:25.242499 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 23:52:25.242512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.242523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.242536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-01 23:52:25.242554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 
'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-01 23:52:25.242566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-01 23:52:25.242574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-01 23:52:25.242581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.242588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.242595 | orchestrator |
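Across the whole neutron task only neutron-server produced changes; the OVS and Linux bridge agents are disabled because this deployment is OVN-based, and neutron-ovn-metadata-agent, the one agent that is enabled, is not hosted on these nodes ('host_in_groups': False). neutron-server itself registers two HTTP frontends, an internal one on the API network and an external one answering for api.testbed.osism.xyz, both proxying to port 9696. A small sketch of the difference; the bind addresses are illustrative:

```python
# The two frontends neutron-server registers; bind addresses are hypothetical.
frontends = {
    "neutron_server": {"mode": "http", "external": False,
                       "bind": "192.168.16.254:9696"},      # assumed internal VIP
    "neutron_server_external": {"mode": "http", "external": True,
                                "bind": "api.testbed.osism.xyz:9696"},
}
for name, fe in frontends.items():
    scope = "external" if fe["external"] else "internal"
    print(f"{name}: {scope} {fe['mode']} frontend on {fe['bind']} -> backend :9696")
```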
[testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 23:52:25.242636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.242643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.242650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.242657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-01 23:52:25.242664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.242680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 23:52:25.242691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 23:52:25.242698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 23:52:25.242706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.242712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.242719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 23:52:25.242735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.242748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.242755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': 
True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.242762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-01 23:52:25.242769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-01 23:52:25.242775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 23:52:25.242790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.242801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.242808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 23:52:25.242816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-01 23:52:25.242823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 23:52:25.242830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-01 23:52:25.242843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.242854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.242861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 23:52:25.242868 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.242875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-01 23:52:25.242882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.242893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 
'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.242905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-01 23:52:25.242915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.242922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 23:52:25.242929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.242936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.242947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-01 23:52:25.242961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-01 23:52:25.243016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.243026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-01 23:52:25.243033 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 23:52:25.243040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.243058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 23:52:25.243069 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.243087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.243101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-01 23:52:25.243108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 
'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.243115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-01 23:52:25.243122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-01 23:52:25.243133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.243144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-01 23:52:25.243155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-06-01 23:52:25.243162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-06-01 23:52:25.243169 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:52:25.243176 | orchestrator |
2025-06-01 23:52:25.243182 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2025-06-01 23:52:25.243189 | orchestrator | Sunday 01 June 2025 23:49:44 +0000 (0:00:01.632) 0:03:52.400 ***********
2025-06-01 23:52:25.243196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-06-01 23:52:25.243207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-06-01 23:52:25.243218 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:52:25.243235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-06-01 23:52:25.243251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-06-01 23:52:25.243266 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:52:25.243277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-06-01 23:52:25.243289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-06-01 23:52:25.243300 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:52:25.243311 | orchestrator |
2025-06-01 23:52:25.243322 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2025-06-01 23:52:25.243333 | orchestrator | Sunday 01 June 2025 23:49:46 +0000 (0:00:02.012) 0:03:54.412 ***********
2025-06-01 23:52:25.243345 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:52:25.243355 | orchestrator | changed: [testbed-node-1]
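The "Configuring firewall for neutron" task above was skipped on every node; the skip condition itself is not visible in this log, so automatic firewall management is presumably disabled for this testbed. The loop items still document the listener shape the haproxy-config role works with: each service carries a `haproxy` mapping whose entries hold `enabled`, `mode`, `external`, `port`, and `listen_port`, with external listeners adding `external_fqdn`. Note that kolla-ansible mixes native booleans with `'yes'`/`'no'` strings for flags such as `enabled` (compare `neutron_server` above with the `neutron-tls-proxy` items earlier). A minimal, hypothetical sketch — not kolla-ansible's actual implementation — of normalizing those flags and deriving the ports a firewall rule would have to open:

```python
def truthy(value):
    """Normalize kolla-style flags: True/'yes' count as on, False/'no' as off."""
    if isinstance(value, bool):
        return value
    return str(value).lower() in ("yes", "true", "1")

# Listener entries as they appear in the loop items above.
listeners = {
    "neutron_server": {"enabled": True, "mode": "http", "external": False,
                       "port": "9696", "listen_port": "9696"},
    "neutron_server_external": {"enabled": True, "mode": "http", "external": True,
                                "external_fqdn": "api.testbed.osism.xyz",
                                "port": "9696", "listen_port": "9696"},
}

def firewall_ports(listeners):
    """Yield (listener name, port to open) for every enabled listener."""
    for name, cfg in listeners.items():
        if truthy(cfg.get("enabled")):
            yield name, int(cfg["listen_port"])

print(list(firewall_ports(listeners)))
# -> [('neutron_server', 9696), ('neutron_server_external', 9696)]
```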
2025-06-01 23:52:25.243367 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:52:25.243378 | orchestrator |
2025-06-01 23:52:25.243389 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2025-06-01 23:52:25.243398 | orchestrator | Sunday 01 June 2025 23:49:47 +0000 (0:00:01.315) 0:03:55.728 ***********
2025-06-01 23:52:25.243410 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:52:25.243422 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:52:25.243434 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:52:25.243445 | orchestrator |
2025-06-01 23:52:25.243456 | orchestrator | TASK [include_role : placement] ************************************************
2025-06-01 23:52:25.243467 | orchestrator | Sunday 01 June 2025 23:49:49 +0000 (0:00:01.967) 0:03:57.695 ***********
2025-06-01 23:52:25.243478 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 23:52:25.243489 | orchestrator |
2025-06-01 23:52:25.243501 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2025-06-01 23:52:25.243512 | orchestrator | Sunday 01 June 2025 23:49:51 +0000 (0:00:01.233) 0:03:58.928 ***********
2025-06-01 23:52:25.243537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-01 23:52:25.243550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-01 23:52:25.243572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-01 23:52:25.243583 | orchestrator | 2025-06-01 23:52:25.243593 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-06-01 23:52:25.243604 | orchestrator | Sunday 01 June 2025 23:49:54 +0000 (0:00:03.637) 0:04:02.566 *********** 2025-06-01 23:52:25.243615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-01 23:52:25.243625 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.243642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-01 23:52:25.243654 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.243669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-01 23:52:25.243686 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:52:25.243697 | orchestrator |
2025-06-01 23:52:25.243707 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2025-06-01 23:52:25.243717 | orchestrator | Sunday 01 June 2025 23:49:55 +0000 (0:00:00.519) 0:04:03.086 ***********
2025-06-01 23:52:25.243728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-06-01 23:52:25.243738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-06-01 23:52:25.243749 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:52:25.243759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-06-01 23:52:25.243770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-06-01 23:52:25.243780 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:52:25.243791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-06-01 23:52:25.243801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-06-01 23:52:25.243811 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:52:25.243822 | orchestrator |
2025-06-01 23:52:25.243831 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2025-06-01 23:52:25.243842 | orchestrator | Sunday 01 June 2025 23:49:56 +0000 (0:00:00.726) 0:04:03.812 ***********
2025-06-01 23:52:25.243852 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:52:25.243863 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:52:25.243873 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:52:25.243883 | orchestrator |
2025-06-01 23:52:25.243893 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2025-06-01 23:52:25.243903 | orchestrator | Sunday 01 June 2025 23:49:57 +0000 (0:00:01.599) 0:04:05.411 ***********
2025-06-01 23:52:25.243914 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:52:25.243924 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:52:25.243934 | orchestrator | changed: [testbed-node-2]
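Besides the task results, each block above carries a profiling line in the form `Sunday 01 June 2025 23:49:57 +0000 (0:00:01.599) 0:04:05.411`: the parenthesized delta is the time consumed by the task that just finished, and the final column is the cumulative play time. A small sketch for pulling per-task durations out of such lines, assuming this fixed profile_tasks-style format shown in the log:

```python
import re

# Profiling line as printed after each TASK header, e.g.:
#   Sunday 01 June 2025 23:49:57 +0000 (0:00:01.599) 0:04:05.411
# (previous task's duration in parentheses, cumulative play time last)
LINE = re.compile(
    r"\((?P<h>\d+):(?P<m>\d+):(?P<s>\d+\.\d+)\)\s+(?P<total>\d+:\d+:\d+\.\d+)"
)

def task_seconds(line):
    """Return (previous task duration in seconds, cumulative time) or None."""
    m = LINE.search(line)
    if not m:
        return None
    duration = int(m["h"]) * 3600 + int(m["m"]) * 60 + float(m["s"])
    return duration, m["total"]

print(task_seconds("Sunday 01 June 2025 23:49:57 +0000 (0:00:01.599) 0:04:05.411"))
# -> (1.599, '0:04:05.411')
```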
2025-06-01 23:52:25.243945 | orchestrator |
2025-06-01 23:52:25.243955 | orchestrator | TASK [include_role : nova] *****************************************************
2025-06-01 23:52:25.243966 | orchestrator | Sunday 01 June 2025 23:49:59 +0000 (0:00:01.879) 0:04:07.290 ***********
2025-06-01 23:52:25.243994 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 23:52:25.244004 | orchestrator |
2025-06-01 23:52:25.244014 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2025-06-01 23:52:25.244024 | orchestrator | Sunday 01 June 2025 23:50:00 +0000 (0:00:01.330) 0:04:08.621 ***********
2025-06-01 23:52:25.244050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-01 23:52:25.244073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-01 23:52:25.244084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-01 23:52:25.244095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '],
'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-01 23:52:25.244112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.244135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.244146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-01 23:52:25.244158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.244169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.244179 | orchestrator | 2025-06-01 23:52:25.244190 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-06-01 23:52:25.244200 | orchestrator | Sunday 01 June 2025 23:50:05 +0000 (0:00:04.501) 0:04:13.122 *********** 2025-06-01 23:52:25.244377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-01 23:52:25.244410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.244421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.244433 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.244514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-01 23:52:25.244538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.244549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.244563 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.244605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-01 23:52:25.244619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.244629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.244641 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.244647 | orchestrator | 2025-06-01 23:52:25.244653 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-06-01 23:52:25.244660 | orchestrator | Sunday 01 June 2025 23:50:06 +0000 (0:00:01.063) 0:04:14.185 *********** 2025-06-01 23:52:25.244667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-01 23:52:25.244674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-01 23:52:25.244681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-01 23:52:25.244687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-01 23:52:25.244700 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.244707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-01 23:52:25.244713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-01 23:52:25.244740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-01 23:52:25.244747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-01 23:52:25.244754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-01 23:52:25.244760 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.244770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-01 23:52:25.244777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-01 23:52:25.244783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-01 23:52:25.244789 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.244795 | orchestrator | 2025-06-01 23:52:25.244802 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-06-01 23:52:25.244808 | orchestrator | Sunday 01 June 2025 23:50:07 +0000 (0:00:00.879) 0:04:15.065 *********** 2025-06-01 23:52:25.244814 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:52:25.244820 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:52:25.244826 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:52:25.244834 | orchestrator | 2025-06-01 23:52:25.244846 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-06-01 23:52:25.244857 | orchestrator | Sunday 01 June 2025 23:50:08 +0000 (0:00:01.625) 0:04:16.690 *********** 2025-06-01 23:52:25.244867 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:52:25.244874 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:52:25.244880 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:52:25.244886 | orchestrator | 2025-06-01 23:52:25.244892 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-06-01 23:52:25.244898 | orchestrator | Sunday 01 June 2025 23:50:11 +0000 (0:00:02.082) 0:04:18.773 *********** 2025-06-01 23:52:25.244904 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:52:25.244910 | orchestrator | 2025-06-01 23:52:25.244916 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-06-01 23:52:25.244922 | orchestrator | Sunday 01 June 2025 23:50:12 +0000 (0:00:01.588) 0:04:20.362 *********** 2025-06-01 23:52:25.244928 
| orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-06-01 23:52:25.244935 | orchestrator | 2025-06-01 23:52:25.244946 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-06-01 23:52:25.244953 | orchestrator | Sunday 01 June 2025 23:50:13 +0000 (0:00:01.092) 0:04:21.454 *********** 2025-06-01 23:52:25.244960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-01 23:52:25.244987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-01 23:52:25.244995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-01 23:52:25.245003 | orchestrator | 2025-06-01 23:52:25.245033 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-06-01 23:52:25.245043 | orchestrator | Sunday 01 June 2025 23:50:17 +0000 (0:00:03.730) 0:04:25.185 *********** 2025-06-01 23:52:25.245061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-01 23:52:25.245073 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.245081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-01 
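[Editor's note: the item= payloads in the tasks above are kolla-ansible service definitions, logged as one-line Python dicts. For readability, here is the nova-api item for testbed-node-0 reflowed as YAML; every value is copied from the log (only the healthcheck IP differs per node: .10/.11/.12), nothing is added. The haproxy sub-dict is what the haproxy-config role iterates over: one internal and one external frontend per service, with nova_metadata_external left disabled.]

nova-api:
  container_name: nova_api
  group: nova-api
  image: registry.osism.tech/kolla/nova-api:2024.2
  enabled: true
  privileged: true
  volumes:
    - /etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro
    - /etc/localtime:/etc/localtime:ro
    - /etc/timezone:/etc/timezone:ro
    - /lib/modules:/lib/modules:ro
    - kolla_logs:/var/log/kolla/
    # the logged list also carries two empty placeholder entries
  dimensions: {}
  healthcheck:
    interval: "30"
    retries: "3"
    start_period: "5"
    test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8774 "]  # trailing space as logged
    timeout: "30"
  haproxy:
    nova_api:
      enabled: true
      mode: http
      external: false
      port: "8774"
      listen_port: "8774"
      tls_backend: "no"
    nova_api_external:
      enabled: true
      mode: http
      external: true
      external_fqdn: api.testbed.osism.xyz
      port: "8774"
      listen_port: "8774"
      tls_backend: "no"
    nova_metadata:
      enabled: true
      mode: http
      external: false
      port: "8775"
      listen_port: "8775"
      tls_backend: "no"
    nova_metadata_external:
      enabled: "no"
      mode: http
      external: true
      external_fqdn: api.testbed.osism.xyz
      port: "8775"
      listen_port: "8775"
      tls_backend: "no"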
23:52:25.245088 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.245096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-01 23:52:25.245103 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.245110 | orchestrator | 2025-06-01 23:52:25.245118 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-06-01 23:52:25.245125 | orchestrator | Sunday 01 June 2025 23:50:18 +0000 (0:00:01.308) 0:04:26.494 *********** 2025-06-01 23:52:25.245138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-01 23:52:25.245145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-01 23:52:25.245153 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.245160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-01 23:52:25.245168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-01 23:52:25.245175 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.245183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-01 23:52:25.245190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-01 23:52:25.245197 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.245204 | orchestrator | 2025-06-01 23:52:25.245211 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-01 23:52:25.245218 | orchestrator | Sunday 01 June 2025 23:50:20 +0000 (0:00:01.835) 0:04:28.329 *********** 2025-06-01 23:52:25.245225 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:52:25.245232 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:52:25.245239 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:52:25.245246 | orchestrator | 2025-06-01 23:52:25.245253 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-01 23:52:25.245260 | 
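[Editor's note: the nova-novncproxy definition copied above differs from the API services mainly in backend_http_extra, which injects "timeout tunnel 1h" into the generated backend so that long-lived noVNC WebSocket tunnels are not cut by the default HTTP timeouts. Reflowed from the logged item, values unchanged:]

nova-novncproxy:
  group: nova-novncproxy
  enabled: true
  haproxy:
    nova_novncproxy:
      enabled: true
      mode: http
      external: false
      port: "6080"
      listen_port: "6080"
      backend_http_extra:
        - timeout tunnel 1h
    nova_novncproxy_external:
      enabled: true
      mode: http
      external: true
      external_fqdn: api.testbed.osism.xyz
      port: "6080"
      listen_port: "6080"
      backend_http_extra:
        - timeout tunnel 1h

[The spicehtml5proxy and serialproxy entries later in the log use the same shape with ports 6082/6083 and enabled: false, which is why those haproxy-config tasks all skip.]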
orchestrator | Sunday 01 June 2025 23:50:22 +0000 (0:00:02.350) 0:04:30.680 *********** 2025-06-01 23:52:25.245267 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:52:25.245274 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:52:25.245281 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:52:25.245288 | orchestrator | 2025-06-01 23:52:25.245319 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-06-01 23:52:25.245331 | orchestrator | Sunday 01 June 2025 23:50:25 +0000 (0:00:03.007) 0:04:33.687 *********** 2025-06-01 23:52:25.245341 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-06-01 23:52:25.245348 | orchestrator | 2025-06-01 23:52:25.245354 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-06-01 23:52:25.245360 | orchestrator | Sunday 01 June 2025 23:50:26 +0000 (0:00:00.802) 0:04:34.490 *********** 2025-06-01 23:52:25.245371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-01 23:52:25.245382 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.245389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-01 23:52:25.245395 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.245401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-01 23:52:25.245408 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.245414 | orchestrator | 2025-06-01 23:52:25.245420 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-06-01 23:52:25.245426 | orchestrator | Sunday 01 June 2025 23:50:28 +0000 (0:00:01.290) 0:04:35.780 *********** 2025-06-01 23:52:25.245433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-01 23:52:25.245439 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.245445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-01 23:52:25.245452 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.245458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-01 23:52:25.245465 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.245471 | orchestrator | 2025-06-01 23:52:25.245495 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-06-01 23:52:25.245502 | orchestrator | Sunday 01 June 2025 23:50:29 +0000 (0:00:01.627) 0:04:37.408 *********** 2025-06-01 23:52:25.245508 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.245514 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.245520 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.245526 | orchestrator | 2025-06-01 23:52:25.245533 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-01 23:52:25.245544 | orchestrator | Sunday 01 June 2025 23:50:30 +0000 (0:00:01.259) 0:04:38.667 *********** 2025-06-01 23:52:25.245550 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:52:25.245556 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:52:25.245562 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:52:25.245568 | orchestrator | 2025-06-01 23:52:25.245578 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-01 23:52:25.245584 | orchestrator | Sunday 01 June 2025 23:50:33 +0000 (0:00:02.193) 0:04:40.861 *********** 2025-06-01 23:52:25.245590 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:52:25.245597 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:52:25.245603 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:52:25.245611 | orchestrator | 2025-06-01 23:52:25.245623 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-06-01 23:52:25.245634 | orchestrator | Sunday 01 June 2025 23:50:36 +0000 (0:00:03.027) 0:04:43.888 *********** 2025-06-01 23:52:25.245642 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-06-01 23:52:25.245648 | orchestrator | 2025-06-01 23:52:25.245654 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-06-01 23:52:25.245660 | orchestrator | Sunday 01 June 2025 23:50:37 +0000 (0:00:01.107) 0:04:44.995 *********** 2025-06-01 23:52:25.245667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-01 23:52:25.245673 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.245680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-01 23:52:25.245686 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.245692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-01 23:52:25.245699 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.245705 | orchestrator | 2025-06-01 23:52:25.245711 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-06-01 23:52:25.245717 | orchestrator | Sunday 01 June 2025 23:50:38 +0000 (0:00:01.062) 0:04:46.058 *********** 2025-06-01 23:52:25.245723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-01 23:52:25.245734 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.245760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 
10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-01 23:52:25.245767 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.245778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-01 23:52:25.245784 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.245790 | orchestrator | 2025-06-01 23:52:25.245797 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-06-01 23:52:25.245803 | orchestrator | Sunday 01 June 2025 23:50:39 +0000 (0:00:01.235) 0:04:47.294 *********** 2025-06-01 23:52:25.245809 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.245815 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.245821 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.245827 | orchestrator | 2025-06-01 23:52:25.245833 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-01 23:52:25.245840 | orchestrator | Sunday 01 June 2025 23:50:41 +0000 (0:00:01.796) 0:04:49.090 *********** 2025-06-01 23:52:25.245846 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:52:25.245852 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:52:25.245858 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:52:25.245864 | orchestrator | 2025-06-01 23:52:25.245870 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-01 23:52:25.245876 | orchestrator | Sunday 01 June 2025 23:50:43 +0000 (0:00:02.251) 0:04:51.342 *********** 2025-06-01 23:52:25.245882 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:52:25.245888 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:52:25.245897 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:52:25.245909 | orchestrator | 2025-06-01 23:52:25.245920 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-06-01 23:52:25.245928 | orchestrator | Sunday 01 June 2025 23:50:46 +0000 (0:00:03.232) 0:04:54.574 *********** 2025-06-01 23:52:25.245935 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:52:25.245941 | orchestrator | 2025-06-01 23:52:25.245947 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-06-01 23:52:25.245953 | orchestrator | Sunday 01 June 2025 23:50:48 +0000 (0:00:01.311) 0:04:55.885 *********** 2025-06-01 23:52:25.245960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
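[Editor's note: the three "Configure loadbalancer for nova-*proxy" passes above each include the same task file, /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml, once per console proxy, as shown by the "=> (item=...)" suffix on each include. A minimal sketch of that pattern in Ansible; the task name and loop form here are illustrative guesses, not the actual nova-cell role source:]

- name: Configure loadbalancer for nova-novncproxy  # illustrative sketch only
  include_tasks: cell_proxy_loadbalancer.yml
  loop:
    - nova-novncproxy

[The same shape repeats for nova-spicehtml5proxy and nova-serialproxy, which is why the log shows three separate TASK headers each followed by an included cell_proxy_loadbalancer.yml line.]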
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-01 23:52:25.246042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-01 23:52:25.246051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-01 23:52:25.246090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-01 23:52:25.246098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-01 23:52:25.246105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.246111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-01 23:52:25.246122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-01 23:52:25.246129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-01 23:52:25.246151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.246161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-01 23:52:25.246166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-01 23:52:25.246172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-01 23:52:25.246178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-01 23:52:25.246187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.246193 | orchestrator | 2025-06-01 23:52:25.246198 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-06-01 23:52:25.246204 | orchestrator | Sunday 01 June 2025 23:50:51 +0000 (0:00:03.661) 0:04:59.547 *********** 2025-06-01 23:52:25.246227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-01 23:52:25.246234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-01 23:52:25.246240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-01 23:52:25.246245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-01 23:52:25.246255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.246260 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.246267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-01 23:52:25.246299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-01 23:52:25.246310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-01 23:52:25.246316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-01 23:52:25.246321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.246331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-01 23:52:25.246337 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.246342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-01 23:52:25.246363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-01 23:52:25.246372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-01 23:52:25.246378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-01 23:52:25.246384 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.246389 | orchestrator | 2025-06-01 23:52:25.246395 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-06-01 23:52:25.246400 | orchestrator | Sunday 01 June 2025 23:50:52 +0000 (0:00:00.800) 0:05:00.348 *********** 2025-06-01 23:52:25.246406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-01 23:52:25.246415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
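[Editor's note: the octavia items follow the same service-definition shape as nova above. One detail worth flagging: octavia's haproxy entries use the string "yes" for enabled where the nova entries use a boolean true; both forms appear in this log and are presumably coerced the same way by the role's truthiness check. Reflowed from the logged testbed-node-0 octavia-api item, values unchanged:]

octavia-api:
  container_name: octavia_api
  group: octavia-api
  enabled: true
  image: registry.osism.tech/kolla/octavia-api:2024.2
  volumes:
    - /etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro
    - /etc/localtime:/etc/localtime:ro
    - /etc/timezone:/etc/timezone:ro
    - kolla_logs:/var/log/kolla/
    - octavia_driver_agent:/var/run/octavia/
    # plus two empty placeholder entries, as logged
  dimensions: {}
  healthcheck:
    interval: "30"
    retries: "3"
    start_period: "5"
    test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9876"]
    timeout: "30"
  haproxy:
    octavia_api:
      enabled: "yes"
      mode: http
      external: false
      port: "9876"
      listen_port: "9876"
      tls_backend: "no"
    octavia_api_external:
      enabled: "yes"
      mode: http
      external: true
      external_fqdn: api.testbed.osism.xyz
      port: "9876"
      listen_port: "9876"
      tls_backend: "no"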
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-01 23:52:25.246421 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.246427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-01 23:52:25.246433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-01 23:52:25.246438 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.246444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-01 23:52:25.246449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-01 23:52:25.246455 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.246460 | orchestrator | 2025-06-01 23:52:25.246465 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-06-01 23:52:25.246471 | orchestrator | Sunday 01 June 2025 23:50:53 +0000 (0:00:00.897) 0:05:01.246 *********** 2025-06-01 23:52:25.246476 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:52:25.246482 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:52:25.246487 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:52:25.246492 | orchestrator | 2025-06-01 23:52:25.246497 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-06-01 23:52:25.246503 | orchestrator | Sunday 01 June 2025 23:50:55 +0000 (0:00:01.805) 0:05:03.051 *********** 2025-06-01 23:52:25.246508 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:52:25.246513 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:52:25.246519 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:52:25.246524 | orchestrator | 2025-06-01 23:52:25.246529 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-06-01 23:52:25.246535 | orchestrator | Sunday 01 June 2025 23:50:57 +0000 (0:00:02.080) 0:05:05.131 *********** 2025-06-01 23:52:25.246540 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:52:25.246545 | orchestrator | 2025-06-01 23:52:25.246551 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-06-01 23:52:25.246556 | orchestrator | Sunday 01 June 2025 23:50:58 +0000 (0:00:01.365) 0:05:06.497 *********** 2025-06-01 23:52:25.246580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-01 23:52:25.246587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-01 23:52:25.246600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-01 23:52:25.246606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-01 23:52:25.246628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-01 23:52:25.246638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-01 23:52:25.246649 | orchestrator |
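Note (sketch, not from this log): each entry under a service's 'haproxy' key in the items above corresponds roughly to one HAProxy listen section on the controllers. A minimal Python sketch of that mapping; the VIP placeholder and the rendering logic are assumptions for illustration, and kolla-ansible's real template handles many more options:

# Render one HAProxy listen section from a kolla-style 'haproxy' entry
# (backend addresses taken from the healthcheck URLs above).
def render_listen(name, svc, backends, vip="<internal_vip>"):
    lines = [f"listen {name}", f"    mode {svc['mode']}"]
    for extra in svc.get("frontend_http_extra", []):
        lines.append(f"    {extra}")          # e.g. 'option dontlog-normal'
    lines.append(f"    bind {vip}:{svc.get('listen_port', svc['port'])}")
    for host, addr in backends:
        lines.append(f"    server {host} {addr}:{svc['port']} check")
    return "\n".join(lines)

print(render_listen(
    "opensearch",
    {"mode": "http", "port": "9200",
     "frontend_http_extra": ["option dontlog-normal"]},
    [("testbed-node-0", "192.168.16.10"),
     ("testbed-node-1", "192.168.16.11"),
     ("testbed-node-2", "192.168.16.12")],
))

Entries with 'external': True get an equivalent section bound on the external VIP instead, using 'listen_port' and, where set, 'external_fqdn'.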
2025-06-01 23:52:25.246654 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-06-01 23:52:25.246660 | orchestrator | Sunday 01 June 2025 23:51:04 +0000 (0:00:05.251) 0:05:11.749 *********** 2025-06-01 23:52:25.246666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-01 23:52:25.246672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-01 23:52:25.246678 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.246698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-01 23:52:25.246710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-01 23:52:25.246719 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.246725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-01 23:52:25.246731 | orchestrator | skipping:
[testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-01 23:52:25.246737 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.246743 | orchestrator |
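Note (sketch, not from this log): the "single external frontend" tasks skip on every node, so this deployment keeps one HAProxy frontend per external service. When that mode is enabled in kolla-ansible, external services are instead expected to share one frontend and be dispatched by Host header; a hand-written sketch of that idiom, with every name and path a placeholder rather than anything generated by this job:

# Hypothetical consolidated external frontend with host-based routing.
single_frontend_sketch = """
frontend external_frontend
    bind <external_vip>:443 ssl crt <path/to/haproxy.pem>
    use_backend opensearch_dashboards_external_back if { req.hdr(host) -i api.testbed.osism.xyz }
"""
print(single_frontend_sketch)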
2025-06-01 23:52:25.246748 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-06-01 23:52:25.246753 | orchestrator | Sunday 01 June 2025 23:51:05 +0000 (0:00:01.029) 0:05:12.778 *********** 2025-06-01 23:52:25.246759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-01 23:52:25.246767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-01 23:52:25.246804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-01 23:52:25.246811 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.246817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-01 23:52:25.246827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-01 23:52:25.246836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-01 23:52:25.246841 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.246847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-01 23:52:25.246853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-01 23:52:25.246858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-01 23:52:25.246864 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.246869 | orchestrator | 2025-06-01 23:52:25.246875 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-06-01 23:52:25.246880 | orchestrator | Sunday 01 June 2025 23:51:06 +0000 (0:00:01.000) 0:05:13.778 *********** 2025-06-01 23:52:25.246886 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.246891 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.246897 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.246902 | orchestrator | 2025-06-01 23:52:25.246907 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-06-01 23:52:25.246913 | orchestrator | Sunday 01 June 2025 23:51:06 +0000 (0:00:00.435) 0:05:14.214 *********** 2025-06-01 23:52:25.246918 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.246923 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.246931 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.246941 | orchestrator | 2025-06-01 23:52:25.246951 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-06-01 23:52:25.246959 | orchestrator | Sunday 01 June 2025 23:51:07 +0000 (0:00:01.390) 0:05:15.605 *********** 2025-06-01 23:52:25.246965 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:52:25.246989 | orchestrator | 2025-06-01 23:52:25.246995 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-06-01 23:52:25.247000 | orchestrator | Sunday 01 June 2025 23:51:09 +0000 (0:00:01.742) 0:05:17.348 *********** 2025-06-01 23:52:25.247006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-01 23:52:25.247012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-01 23:52:25.247042 | orchestrator | skipping:
[testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:52:25.247053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:52:25.247059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-01 23:52:25.247065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-01 23:52:25.247071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-01 23:52:25.247077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:52:25.247083 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-01 23:52:25.247110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:52:25.247120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-01 23:52:25.247126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-01 23:52:25.247132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:52:25.247138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:52:25.247144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-01 23:52:25.247150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-01 23:52:25.247163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-01 23:52:25.247173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:52:25.247178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-01 23:52:25.247184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:52:25.247190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-01 23:52:25.247204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-01 23:52:25.247213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-01 23:52:25.247219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 
'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-01 23:52:25.247224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:52:25.247230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:52:25.247236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:52:25.247245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-01 23:52:25.247254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:52:25.247260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 
'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-01 23:52:25.247265 | orchestrator |
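Note (sketch, not from this log): prometheus_server and prometheus_alertmanager are flagged 'active_passive': True in the items above, meaning only one backend may serve traffic at a time. The usual HAProxy idiom for that is to mark all but one server as backup; a sketch assuming the three controllers as backends and a placeholder VIP, not the stanza this job actually wrote:

# Active/passive sketch: one live server, the rest as hot standbys.
backends = [("testbed-node-0", "192.168.16.10"),
            ("testbed-node-1", "192.168.16.11"),
            ("testbed-node-2", "192.168.16.12")]
stanza = ["listen prometheus_server", "    mode http",
          "    bind <internal_vip>:9091"]
for i, (host, addr) in enumerate(backends):
    backup = " backup" if i > 0 else ""   # every server after the first is passive
    stanza.append(f"    server {host} {addr}:9091 check{backup}")
print("\n".join(stanza))

On failure of the active server, HAProxy promotes the first healthy backup, which keeps a single writer for services that cannot tolerate concurrent instances.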
2025-06-01 23:52:25.247274 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-06-01 23:52:25.247279 | orchestrator | Sunday 01 June 2025 23:51:13 +0000 (0:00:04.069) 0:05:21.417 *********** 2025-06-01 23:52:25.247285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-01 23:52:25.247291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-01 23:52:25.247296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:52:25.247307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:52:25.247312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-01 23:52:25.247322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-01 23:52:25.247333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-01 23:52:25.247339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:52:25.247344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:52:25.247354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-01 23:52:25.247360 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.247366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-01 23:52:25.247374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-01 23:52:25.247380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:52:25.247389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:52:25.247395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-01 23:52:25.247401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-01 23:52:25.247410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-01 23:52:25.247416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:52:25.247425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:52:25.247433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-01 23:52:25.247439 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.247445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-01 23:52:25.247450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-01 23:52:25.247462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:52:25.247468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:52:25.247474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-01 23:52:25.247482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 
'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-01 23:52:25.247492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-01 23:52:25.247498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:52:25.247507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-01 23:52:25.247513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-01 23:52:25.247518 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.247524 | orchestrator | 2025-06-01 23:52:25.247529 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-06-01 23:52:25.247535 | orchestrator | Sunday 01 June 2025 23:51:15 +0000 (0:00:01.540) 0:05:22.957 *********** 2025-06-01 23:52:25.247540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-01 23:52:25.247546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-01 23:52:25.247552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 
'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-01 23:52:25.247560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-01 23:52:25.247567 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.247572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-01 23:52:25.247578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-01 23:52:25.247587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-01 23:52:25.247593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-01 23:52:25.247598 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.247607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-01 23:52:25.247613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-01 23:52:25.247618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-01 23:52:25.247624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-01 23:52:25.247630 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.247635 | orchestrator | 2025-06-01 23:52:25.247641 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-06-01 23:52:25.247646 | orchestrator | Sunday 01 June 2025 23:51:16 +0000 (0:00:01.077) 0:05:24.035 *********** 2025-06-01 23:52:25.247652 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.247657 | orchestrator | skipping: [testbed-node-1] 2025-06-01 
23:52:25.247663 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.247668 | orchestrator | 2025-06-01 23:52:25.247677 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-06-01 23:52:25.247682 | orchestrator | Sunday 01 June 2025 23:51:16 +0000 (0:00:00.425) 0:05:24.460 *********** 2025-06-01 23:52:25.247688 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.247693 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.247699 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.247704 | orchestrator | 2025-06-01 23:52:25.247714 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-06-01 23:52:25.247724 | orchestrator | Sunday 01 June 2025 23:51:18 +0000 (0:00:01.701) 0:05:26.162 *********** 2025-06-01 23:52:25.247734 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:52:25.247742 | orchestrator | 2025-06-01 23:52:25.247747 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-06-01 23:52:25.247753 | orchestrator | Sunday 01 June 2025 23:51:20 +0000 (0:00:01.739) 0:05:27.901 *********** 2025-06-01 23:52:25.247761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-01 23:52:25.247771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-01 23:52:25.247781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 
'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-01 23:52:25.247787 | orchestrator | 2025-06-01 23:52:25.247793 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-06-01 23:52:25.247798 | orchestrator | Sunday 01 June 2025 23:51:22 +0000 (0:00:02.562) 0:05:30.464 *********** 2025-06-01 23:52:25.247804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-01 23:52:25.247810 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.247818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-01 23:52:25.247824 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.247833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-01 23:52:25.247846 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.247855 | orchestrator | 2025-06-01 23:52:25.247863 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-06-01 23:52:25.247871 | orchestrator | Sunday 01 June 2025 23:51:23 +0000 (0:00:00.371) 0:05:30.836 *********** 2025-06-01 23:52:25.247879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-01 23:52:25.247888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-01 23:52:25.247897 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.247906 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.247914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-01 23:52:25.247923 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.247932 | orchestrator | 2025-06-01 23:52:25.247941 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-06-01 23:52:25.247948 | orchestrator | Sunday 01 June 2025 23:51:24 +0000 (0:00:01.084) 0:05:31.921 *********** 2025-06-01 23:52:25.247954 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.247959 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.247964 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.247990 | orchestrator | 2025-06-01 23:52:25.247996 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-06-01 23:52:25.248002 | orchestrator | Sunday 01 June 2025 23:51:24 +0000 (0:00:00.455) 0:05:32.377 *********** 2025-06-01 23:52:25.248007 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.248012 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.248018 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.248023 | orchestrator |
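The rabbitmq items logged above show the shape of a kolla-ansible service definition as the haproxy-config role loops over it: the 'haproxy' sub-dict carries one entry per listener, and the same entries also feed the firewall task that was skipped here. A minimal sketch of that filtering step, using only field names visible in the log; the helper itself is illustrative and is not the role's actual implementation:

    # Illustrative only: reduce a kolla-style service definition (like the
    # rabbitmq item logged above) to its enabled haproxy listeners. Field
    # names mirror the log output; kolla stores these flags as 'yes'/'no'.
    def enabled_frontends(services):
        for svc in services.values():
            if not svc.get("enabled"):
                continue
            for name, entry in svc.get("haproxy", {}).items():
                if entry.get("enabled") == "yes":
                    yield name, entry

    rabbitmq = {
        "enabled": True,
        "haproxy": {
            "rabbitmq_management": {
                "enabled": "yes", "mode": "http",
                "port": "15672", "host_group": "rabbitmq",
            },
        },
    }
    for name, entry in enabled_frontends({"rabbitmq": rabbitmq}):
        print(name, entry["mode"], entry["port"])  # rabbitmq_management http 15672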
2025-06-01 23:52:25.248028 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-06-01 23:52:25.248034 | orchestrator | Sunday 01 June 2025 23:51:25 +0000 (0:00:01.362) 0:05:33.740 *********** 2025-06-01 23:52:25.248039 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:52:25.248044 | orchestrator | 2025-06-01 23:52:25.248050 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-06-01 23:52:25.248055 | orchestrator | Sunday 01 June 2025 23:51:27 +0000 (0:00:01.792) 0:05:35.533 *********** 2025-06-01 23:52:25.248062 | orchestrator | changed:
[testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-01 23:52:25.248092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-01 23:52:25.248099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-01 23:52:25.248105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999',
'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-01 23:52:25.248112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-01 23:52:25.248125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-01 23:52:25.248131 | orchestrator | 2025-06-01 23:52:25.248137 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-06-01 23:52:25.248142 | orchestrator | Sunday 01 June 2025 23:51:34 +0000 (0:00:06.288) 0:05:41.821 *********** 2025-06-01 23:52:25.248150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-01 23:52:25.248156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 
'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-01 23:52:25.248162 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.248168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-01 23:52:25.248180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-01 23:52:25.248186 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.248192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-01 23:52:25.248198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-01 23:52:25.248203 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.248208 | orchestrator | 2025-06-01 23:52:25.248214 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-06-01 23:52:25.248219 | orchestrator | Sunday 01 June 2025 23:51:34 +0000 (0:00:00.646) 0:05:42.468 *********** 2025-06-01 23:52:25.248225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-01 23:52:25.248266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-01 23:52:25.248279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-01 23:52:25.248289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-01 23:52:25.248295 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.248300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-01 23:52:25.248306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-01 23:52:25.248312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-01 23:52:25.248317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-01 23:52:25.248323 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.248332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-01 23:52:25.248338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-01 23:52:25.248346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-01 23:52:25.248352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-01 23:52:25.248357 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.248363 | orchestrator |
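Each skyline listener appears twice in the items above, once plain and once with an _external suffix; the pair differs only in the 'external' flag and the 'external_fqdn' (api.testbed.osism.xyz), which is how the same backend port gets exposed on both the internal VIP and the public endpoint. A hedged sketch of that partitioning, with field names taken from the log and the helper invented for illustration:

    # Illustrative partition of listener entries between the internal VIP
    # and the external endpoint, keyed on the 'external' flag seen above.
    def split_frontends(entries):
        internal = {k: v for k, v in entries.items() if not v.get("external")}
        external = {k: v for k, v in entries.items() if v.get("external")}
        return internal, external

    entries = {
        "skyline_apiserver": {"external": False, "port": "9998"},
        "skyline_apiserver_external": {
            "external": True,
            "external_fqdn": "api.testbed.osism.xyz",
            "port": "9998",
        },
    }
    internal, external = split_frontends(entries)
    print(sorted(internal))  # ['skyline_apiserver']
    print(sorted(external))  # ['skyline_apiserver_external']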
2025-06-01 23:52:25.248368 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-06-01 23:52:25.248374 | orchestrator | Sunday 01 June 2025 23:51:36 +0000 (0:00:01.820) 0:05:44.289 *********** 2025-06-01 23:52:25.248380 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:52:25.248385 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:52:25.248390 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:52:25.248396 | orchestrator | 2025-06-01 23:52:25.248401 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-06-01 23:52:25.248407 | orchestrator | Sunday 01 June 2025 23:51:37 +0000 (0:00:01.296) 0:05:45.585 *********** 2025-06-01 23:52:25.248412 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:52:25.248417 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:52:25.248423 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:52:25.248428 | orchestrator | 2025-06-01 23:52:25.248433 | orchestrator | TASK [include_role : swift] **************************************************** 2025-06-01 23:52:25.248439 | orchestrator | Sunday 01 June 2025 23:51:39 +0000 (0:00:02.139) 0:05:47.725 *********** 2025-06-01 23:52:25.248444 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.248449 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.248455 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.248460 | orchestrator | 2025-06-01 23:52:25.248466 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-06-01 23:52:25.248475 | orchestrator | Sunday 01 June 2025 23:51:40 +0000 (0:00:00.318) 0:05:48.044 *********** 2025-06-01 23:52:25.248480 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.248486 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.248495 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.248504 | orchestrator | 2025-06-01 23:52:25.248513 | orchestrator | TASK [include_role : trove] **************************************************** 2025-06-01 23:52:25.248520 | orchestrator | Sunday
01 June 2025 23:51:40 +0000 (0:00:00.327) 0:05:48.371 *********** 2025-06-01 23:52:25.248526 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.248531 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.248536 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.248541 | orchestrator | 2025-06-01 23:52:25.248547 | orchestrator | TASK [include_role : venus] **************************************************** 2025-06-01 23:52:25.248552 | orchestrator | Sunday 01 June 2025 23:51:41 +0000 (0:00:00.711) 0:05:49.082 *********** 2025-06-01 23:52:25.248557 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.248563 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.248568 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.248573 | orchestrator | 2025-06-01 23:52:25.248578 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-06-01 23:52:25.248584 | orchestrator | Sunday 01 June 2025 23:51:41 +0000 (0:00:00.387) 0:05:49.470 *********** 2025-06-01 23:52:25.248589 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.248594 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.248599 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.248605 | orchestrator | 2025-06-01 23:52:25.248610 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-06-01 23:52:25.248615 | orchestrator | Sunday 01 June 2025 23:51:42 +0000 (0:00:00.301) 0:05:49.771 *********** 2025-06-01 23:52:25.248620 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.248626 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.248631 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.248636 | orchestrator | 2025-06-01 23:52:25.248642 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-06-01 23:52:25.248647 | orchestrator | Sunday 01 June 2025 23:51:42 +0000 (0:00:00.819) 0:05:50.591 *********** 2025-06-01 23:52:25.248652 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:52:25.248658 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:52:25.248663 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:52:25.248668 | orchestrator | 2025-06-01 23:52:25.248674 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-06-01 23:52:25.248681 | orchestrator | Sunday 01 June 2025 23:51:43 +0000 (0:00:00.640) 0:05:51.231 *********** 2025-06-01 23:52:25.248690 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:52:25.248699 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:52:25.248706 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:52:25.248711 | orchestrator | 2025-06-01 23:52:25.248717 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-06-01 23:52:25.248722 | orchestrator | Sunday 01 June 2025 23:51:43 +0000 (0:00:00.345) 0:05:51.577 *********** 2025-06-01 23:52:25.248727 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:52:25.248732 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:52:25.248738 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:52:25.248743 | orchestrator | 2025-06-01 23:52:25.248748 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-06-01 23:52:25.248754 | orchestrator | Sunday 01 June 2025 23:51:44 +0000 (0:00:01.262) 0:05:52.429 *********** 2025-06-01 23:52:25.248759 | orchestrator | ok:
[testbed-node-0] 2025-06-01 23:52:25.248764 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:52:25.248772 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:52:25.248778 | orchestrator | 2025-06-01 23:52:25.248783 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-06-01 23:52:25.248789 | orchestrator | Sunday 01 June 2025 23:51:45 +0000 (0:00:01.262) 0:05:53.692 *********** 2025-06-01 23:52:25.248798 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:52:25.248803 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:52:25.248809 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:52:25.248816 | orchestrator | 2025-06-01 23:52:25.248825 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-06-01 23:52:25.248835 | orchestrator | Sunday 01 June 2025 23:51:46 +0000 (0:00:00.852) 0:05:54.544 *********** 2025-06-01 23:52:25.248843 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:52:25.248848 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:52:25.248857 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:52:25.248862 | orchestrator | 2025-06-01 23:52:25.248867 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-06-01 23:52:25.248873 | orchestrator | Sunday 01 June 2025 23:51:51 +0000 (0:00:04.947) 0:05:59.492 *********** 2025-06-01 23:52:25.248878 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:52:25.248883 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:52:25.248888 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:52:25.248894 | orchestrator | 2025-06-01 23:52:25.248899 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-06-01 23:52:25.248905 | orchestrator | Sunday 01 June 2025 23:51:55 +0000 (0:00:03.689) 0:06:03.182 *********** 2025-06-01 23:52:25.248910 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:52:25.248915 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:52:25.248920 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:52:25.248926 | orchestrator | 2025-06-01 23:52:25.248931 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-06-01 23:52:25.248937 | orchestrator | Sunday 01 June 2025 23:52:04 +0000 (0:00:08.576) 0:06:11.758 *********** 2025-06-01 23:52:25.248942 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:52:25.248947 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:52:25.248952 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:52:25.248958 | orchestrator | 2025-06-01 23:52:25.248963 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-06-01 23:52:25.248982 | orchestrator | Sunday 01 June 2025 23:52:07 +0000 (0:00:03.741) 0:06:15.500 *********** 2025-06-01 23:52:25.248988 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:52:25.248993 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:52:25.248999 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:52:25.249004 | orchestrator | 2025-06-01 23:52:25.249010 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-06-01 23:52:25.249015 | orchestrator | Sunday 01 June 2025 23:52:17 +0000 (0:00:09.551) 0:06:25.051 *********** 2025-06-01 23:52:25.249020 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.249026 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.249031 | orchestrator | 
skipping: [testbed-node-2] 2025-06-01 23:52:25.249050 | orchestrator | 2025-06-01 23:52:25.249056 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-06-01 23:52:25.249061 | orchestrator | Sunday 01 June 2025 23:52:17 +0000 (0:00:00.331) 0:06:25.383 *********** 2025-06-01 23:52:25.249067 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.249072 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.249077 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.249082 | orchestrator | 2025-06-01 23:52:25.249095 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-06-01 23:52:25.249100 | orchestrator | Sunday 01 June 2025 23:52:18 +0000 (0:00:00.708) 0:06:26.092 *********** 2025-06-01 23:52:25.249106 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.249115 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.249125 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.249135 | orchestrator | 2025-06-01 23:52:25.249140 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-06-01 23:52:25.249145 | orchestrator | Sunday 01 June 2025 23:52:18 +0000 (0:00:00.336) 0:06:26.428 *********** 2025-06-01 23:52:25.249151 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.249156 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.249166 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.249171 | orchestrator | 2025-06-01 23:52:25.249176 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-06-01 23:52:25.249182 | orchestrator | Sunday 01 June 2025 23:52:19 +0000 (0:00:00.331) 0:06:26.760 *********** 2025-06-01 23:52:25.249187 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.249192 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.249198 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.249203 | orchestrator | 2025-06-01 23:52:25.249208 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-06-01 23:52:25.249214 | orchestrator | Sunday 01 June 2025 23:52:19 +0000 (0:00:00.334) 0:06:27.094 *********** 2025-06-01 23:52:25.249219 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:52:25.249224 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:52:25.249229 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:52:25.249235 | orchestrator | 2025-06-01 23:52:25.249240 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-06-01 23:52:25.249245 | orchestrator | Sunday 01 June 2025 23:52:20 +0000 (0:00:00.673) 0:06:27.768 *********** 2025-06-01 23:52:25.249251 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:52:25.249256 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:52:25.249261 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:52:25.249266 | orchestrator | 2025-06-01 23:52:25.249272 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-06-01 23:52:25.249277 | orchestrator | Sunday 01 June 2025 23:52:20 +0000 (0:00:00.931) 0:06:28.699 *********** 2025-06-01 23:52:25.249282 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:52:25.249288 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:52:25.249293 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:52:25.249298 | orchestrator |
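The handler sequence above encodes a backup-first rolling restart: nodes are grouped by keepalived status, the backup nodes' keepalived/haproxy/proxysql containers are stopped and restarted, and each restart must pass the two "Wait for ... to start" checks before any master-side handler would run; on this run every master-side stop/start was skipped. A schematic of that ordering, with all function names invented for illustration and not taken from the loadbalancer role:

    # Schematic of the backup-before-master ordering visible in the
    # handlers above; start_stack()/is_listening() are stand-ins.
    import time

    def restart_node(node, start_stack, is_listening, timeout=60):
        start_stack(node)                   # keepalived, haproxy, proxysql
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:  # "Wait for ... to listen on VIP"
            if is_listening(node):
                return
            time.sleep(1)
        raise TimeoutError(f"{node} did not start listening")

    def rolling_restart(nodes, ha_status, start_stack, is_listening):
        ordered = [n for n in nodes if ha_status[n] == "BACKUP"] + \
                  [n for n in nodes if ha_status[n] == "MASTER"]
        for node in ordered:                # backups strictly first
            restart_node(node, start_stack, is_listening)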
2025-06-01 23:52:25.249304 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 23:52:25.249309 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-06-01 23:52:25.249318 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-06-01 23:52:25.249324 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-06-01 23:52:25.249330 | orchestrator | 2025-06-01 23:52:25.249335 | orchestrator | 2025-06-01 23:52:25.249342 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 23:52:25.249352 | orchestrator | Sunday 01 June 2025 23:52:21 +0000 (0:00:00.854) 0:06:29.554 *********** 2025-06-01 23:52:25.249361 | orchestrator | =============================================================================== 2025-06-01 23:52:25.249371 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.55s 2025-06-01 23:52:25.249377 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.58s 2025-06-01 23:52:25.249382 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.29s 2025-06-01 23:52:25.249387 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 6.02s 2025-06-01 23:52:25.249392 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.80s 2025-06-01 23:52:25.249398 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 5.63s 2025-06-01 23:52:25.249403 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.25s 2025-06-01 23:52:25.249408 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.95s 2025-06-01 23:52:25.249413 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.72s 2025-06-01 23:52:25.249419 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.50s 2025-06-01 23:52:25.249424 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.46s 2025-06-01 23:52:25.249434 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.11s 2025-06-01 23:52:25.249439 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.08s 2025-06-01 23:52:25.249444 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.07s 2025-06-01 23:52:25.249449 | orchestrator | proxysql-config : Copying over barbican ProxySQL users config ----------- 4.00s 2025-06-01 23:52:25.249455 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.82s 2025-06-01 23:52:25.249460 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.77s 2025-06-01 23:52:25.249465 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 3.74s 2025-06-01 23:52:25.249471 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.73s 2025-06-01 23:52:25.249476 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.73s
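The INFO lines that follow come from the OSISM wait logic: three task IDs are polled once per second until they leave the STARTED state. A minimal sketch of that loop, assuming a hypothetical get_task_state() accessor in place of whatever the real osism tooling uses to query its task backend:

    # Minimal polling loop matching the log pattern below; illustrative.
    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1):
        pending = list(task_ids)
        while pending:
            still_running = []
            for task_id in pending:
                state = get_task_state(task_id)  # hypothetical accessor
                print(f"Task {task_id} is in state {state}")
                if state in ("PENDING", "STARTED"):
                    still_running.append(task_id)
            pending = still_running
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)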
2025-06-01 23:52:25.249481 | orchestrator | 2025-06-01 23:52:25 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED 2025-06-01 23:52:25.249487 | orchestrator | 2025-06-01 23:52:25 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:52:28.300539 | orchestrator | 2025-06-01 23:52:28 | INFO  | Task df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state STARTED 2025-06-01 23:52:28.300666 | orchestrator | 2025-06-01 23:52:28 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:52:28.300683 | orchestrator | 2025-06-01 23:52:28 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED 2025-06-01 23:52:28.300704 | orchestrator | 2025-06-01 23:52:28 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:52:31.344097 | orchestrator | 2025-06-01 23:52:31 | INFO  | Task df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state STARTED 2025-06-01 23:52:31.344373 | orchestrator | 2025-06-01 23:52:31 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:52:31.344919 | orchestrator | 2025-06-01 23:52:31 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED 2025-06-01 23:52:31.344958 | orchestrator | 2025-06-01 23:52:31 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:52:34.385140 | orchestrator | 2025-06-01 23:52:34 | INFO  | Task df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state STARTED 2025-06-01 23:52:34.385574 | orchestrator | 2025-06-01 23:52:34 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:52:34.386759 | orchestrator | 2025-06-01 23:52:34 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED 2025-06-01 23:52:34.388772 | orchestrator | 2025-06-01 23:52:34 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:52:37.427263 | orchestrator | 2025-06-01 23:52:37 | INFO  | Task df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state STARTED 2025-06-01 23:52:37.430304 | orchestrator | 2025-06-01 23:52:37 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:52:37.432661 | orchestrator | 2025-06-01 23:52:37 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED 2025-06-01 23:52:37.433294 | orchestrator | 2025-06-01 23:52:37 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:52:40.460430 | orchestrator | 2025-06-01 23:52:40 | INFO  | Task df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state STARTED 2025-06-01 23:52:40.461733 | orchestrator | 2025-06-01 23:52:40 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:52:40.462011 | orchestrator | 2025-06-01 23:52:40 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED 2025-06-01 23:52:40.462518 | orchestrator | 2025-06-01 23:52:40 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:52:43.504315 | orchestrator | 2025-06-01 23:52:43 | INFO  | Task df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state STARTED 2025-06-01 23:52:43.504438 | orchestrator | 2025-06-01 23:52:43 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:52:43.504464 | orchestrator | 2025-06-01 23:52:43 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED 2025-06-01 23:52:43.504771 | orchestrator | 2025-06-01 23:52:43 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:52:46.549810 | orchestrator | 2025-06-01 23:52:46 | INFO  | Task df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state STARTED 2025-06-01 23:52:46.550433 | orchestrator | 2025-06-01 23:52:46 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state
STARTED 2025-06-01 23:52:46.551595 | orchestrator | 2025-06-01 23:52:46 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED 2025-06-01 23:52:46.551635 | orchestrator | 2025-06-01 23:52:46 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:52:49.591367 | orchestrator | 2025-06-01 23:52:49 | INFO  | Task df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state STARTED 2025-06-01 23:52:49.591500 | orchestrator | 2025-06-01 23:52:49 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:52:49.593342 | orchestrator | 2025-06-01 23:52:49 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED 2025-06-01 23:52:49.593367 | orchestrator | 2025-06-01 23:52:49 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:52:52.646579 | orchestrator | 2025-06-01 23:52:52 | INFO  | Task df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state STARTED 2025-06-01 23:52:52.647462 | orchestrator | 2025-06-01 23:52:52 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:52:52.650261 | orchestrator | 2025-06-01 23:52:52 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED 2025-06-01 23:52:52.650763 | orchestrator | 2025-06-01 23:52:52 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:52:55.698201 | orchestrator | 2025-06-01 23:52:55 | INFO  | Task df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state STARTED 2025-06-01 23:52:55.701021 | orchestrator | 2025-06-01 23:52:55 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:52:55.702108 | orchestrator | 2025-06-01 23:52:55 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED 2025-06-01 23:52:55.702171 | orchestrator | 2025-06-01 23:52:55 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:52:58.732938 | orchestrator | 2025-06-01 23:52:58 | INFO  | Task df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state STARTED 2025-06-01 23:52:58.735870 | orchestrator | 2025-06-01 23:52:58 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:52:58.742622 | orchestrator | 2025-06-01 23:52:58 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED 2025-06-01 23:52:58.742997 | orchestrator | 2025-06-01 23:52:58 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:53:01.793045 | orchestrator | 2025-06-01 23:53:01 | INFO  | Task df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state STARTED 2025-06-01 23:53:01.793643 | orchestrator | 2025-06-01 23:53:01 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:53:01.794681 | orchestrator | 2025-06-01 23:53:01 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED 2025-06-01 23:53:01.794693 | orchestrator | 2025-06-01 23:53:01 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:53:04.844468 | orchestrator | 2025-06-01 23:53:04 | INFO  | Task df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state STARTED 2025-06-01 23:53:04.846323 | orchestrator | 2025-06-01 23:53:04 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:53:04.848296 | orchestrator | 2025-06-01 23:53:04 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED 2025-06-01 23:53:04.848541 | orchestrator | 2025-06-01 23:53:04 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:53:07.894288 | orchestrator | 2025-06-01 23:53:07 | INFO  | Task df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state STARTED 2025-06-01 23:53:07.895460 | orchestrator 
| 2025-06-01 23:53:07 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:53:07.897665 | orchestrator | 2025-06-01 23:53:07 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED 2025-06-01 23:53:07.897783 | orchestrator | 2025-06-01 23:53:07 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:53:10.935924 | orchestrator | 2025-06-01 23:53:10 | INFO  | Task df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state STARTED 2025-06-01 23:53:10.936259 | orchestrator | 2025-06-01 23:53:10 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:53:10.939933 | orchestrator | 2025-06-01 23:53:10 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED 2025-06-01 23:53:10.940015 | orchestrator | 2025-06-01 23:53:10 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:53:13.976065 | orchestrator | 2025-06-01 23:53:13 | INFO  | Task df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state STARTED 2025-06-01 23:53:13.979852 | orchestrator | 2025-06-01 23:53:13 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:53:13.982158 | orchestrator | 2025-06-01 23:53:13 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED 2025-06-01 23:53:13.982351 | orchestrator | 2025-06-01 23:53:13 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:53:17.025249 | orchestrator | 2025-06-01 23:53:17 | INFO  | Task df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state STARTED 2025-06-01 23:53:17.026945 | orchestrator | 2025-06-01 23:53:17 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:53:17.029775 | orchestrator | 2025-06-01 23:53:17 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED 2025-06-01 23:53:17.029816 | orchestrator | 2025-06-01 23:53:17 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:53:20.084459 | orchestrator | 2025-06-01 23:53:20 | INFO  | Task df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state STARTED 2025-06-01 23:53:20.086870 | orchestrator | 2025-06-01 23:53:20 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:53:20.092253 | orchestrator | 2025-06-01 23:53:20 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED 2025-06-01 23:53:20.092329 | orchestrator | 2025-06-01 23:53:20 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:53:23.140316 | orchestrator | 2025-06-01 23:53:23 | INFO  | Task df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state STARTED 2025-06-01 23:53:23.142281 | orchestrator | 2025-06-01 23:53:23 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:53:23.143723 | orchestrator | 2025-06-01 23:53:23 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED 2025-06-01 23:53:23.143858 | orchestrator | 2025-06-01 23:53:23 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:53:26.192616 | orchestrator | 2025-06-01 23:53:26 | INFO  | Task df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state STARTED 2025-06-01 23:53:26.193770 | orchestrator | 2025-06-01 23:53:26 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:53:26.194705 | orchestrator | 2025-06-01 23:53:26 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED 2025-06-01 23:53:26.194726 | orchestrator | 2025-06-01 23:53:26 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:53:29.252397 | orchestrator | 2025-06-01 23:53:29 | INFO  | Task 
df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state STARTED 2025-06-01 23:53:29.255104 | orchestrator | 2025-06-01 23:53:29 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:53:29.258216 | orchestrator | 2025-06-01 23:53:29 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED 2025-06-01 23:53:29.258639 | orchestrator | 2025-06-01 23:53:29 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:53:32.300692 | orchestrator | 2025-06-01 23:53:32 | INFO  | Task df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state STARTED 2025-06-01 23:53:32.303078 | orchestrator | 2025-06-01 23:53:32 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:53:32.306204 | orchestrator | 2025-06-01 23:53:32 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED 2025-06-01 23:53:32.306601 | orchestrator | 2025-06-01 23:53:32 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:53:35.364915 | orchestrator | 2025-06-01 23:53:35 | INFO  | Task df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state STARTED 2025-06-01 23:53:35.366856 | orchestrator | 2025-06-01 23:53:35 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:53:35.368780 | orchestrator | 2025-06-01 23:53:35 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED 2025-06-01 23:53:35.368812 | orchestrator | 2025-06-01 23:53:35 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:53:38.423548 | orchestrator | 2025-06-01 23:53:38 | INFO  | Task df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state STARTED 2025-06-01 23:53:38.425353 | orchestrator | 2025-06-01 23:53:38 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:53:38.427358 | orchestrator | 2025-06-01 23:53:38 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED 2025-06-01 23:53:38.427395 | orchestrator | 2025-06-01 23:53:38 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:53:41.483418 | orchestrator | 2025-06-01 23:53:41 | INFO  | Task df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state STARTED 2025-06-01 23:53:41.485816 | orchestrator | 2025-06-01 23:53:41 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:53:41.487780 | orchestrator | 2025-06-01 23:53:41 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED 2025-06-01 23:53:41.487828 | orchestrator | 2025-06-01 23:53:41 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:53:44.546358 | orchestrator | 2025-06-01 23:53:44 | INFO  | Task df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state STARTED 2025-06-01 23:53:44.548264 | orchestrator | 2025-06-01 23:53:44 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:53:44.549990 | orchestrator | 2025-06-01 23:53:44 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED 2025-06-01 23:53:44.550116 | orchestrator | 2025-06-01 23:53:44 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:53:47.600247 | orchestrator | 2025-06-01 23:53:47 | INFO  | Task df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state STARTED 2025-06-01 23:53:47.601294 | orchestrator | 2025-06-01 23:53:47 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:53:47.604481 | orchestrator | 2025-06-01 23:53:47 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED 2025-06-01 23:53:47.604531 | orchestrator | 2025-06-01 23:53:47 | INFO  | Wait 1 second(s) until the next 
check 2025-06-01 23:53:50.664685 | orchestrator | 2025-06-01 23:53:50 | INFO  | Task df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state STARTED 2025-06-01 23:53:50.665544 | orchestrator | 2025-06-01 23:53:50 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:53:50.666754 | orchestrator | 2025-06-01 23:53:50 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED 2025-06-01 23:53:50.666878 | orchestrator | 2025-06-01 23:53:50 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:53:53.713016 | orchestrator | 2025-06-01 23:53:53 | INFO  | Task df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state STARTED 2025-06-01 23:53:53.715032 | orchestrator | 2025-06-01 23:53:53 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:53:53.716558 | orchestrator | 2025-06-01 23:53:53 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED 2025-06-01 23:53:53.716606 | orchestrator | 2025-06-01 23:53:53 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:53:56.765900 | orchestrator | 2025-06-01 23:53:56 | INFO  | Task df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state STARTED 2025-06-01 23:53:56.770165 | orchestrator | 2025-06-01 23:53:56 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:53:56.772515 | orchestrator | 2025-06-01 23:53:56 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED 2025-06-01 23:53:56.772551 | orchestrator | 2025-06-01 23:53:56 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:53:59.825797 | orchestrator | 2025-06-01 23:53:59 | INFO  | Task df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state STARTED 2025-06-01 23:53:59.827582 | orchestrator | 2025-06-01 23:53:59 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:53:59.830846 | orchestrator | 2025-06-01 23:53:59 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED 2025-06-01 23:53:59.830906 | orchestrator | 2025-06-01 23:53:59 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:54:02.879907 | orchestrator | 2025-06-01 23:54:02 | INFO  | Task df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state STARTED 2025-06-01 23:54:02.880063 | orchestrator | 2025-06-01 23:54:02 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:54:02.880073 | orchestrator | 2025-06-01 23:54:02 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED 2025-06-01 23:54:02.880082 | orchestrator | 2025-06-01 23:54:02 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:54:05.938577 | orchestrator | 2025-06-01 23:54:05 | INFO  | Task df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state STARTED 2025-06-01 23:54:05.942440 | orchestrator | 2025-06-01 23:54:05 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:54:05.944024 | orchestrator | 2025-06-01 23:54:05 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED 2025-06-01 23:54:05.944862 | orchestrator | 2025-06-01 23:54:05 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:54:08.994625 | orchestrator | 2025-06-01 23:54:08 | INFO  | Task df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state STARTED 2025-06-01 23:54:08.998263 | orchestrator | 2025-06-01 23:54:08 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:54:08.999143 | orchestrator | 2025-06-01 23:54:08 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED 2025-06-01 
23:54:08.999169 | orchestrator | 2025-06-01 23:54:08 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:54:12.055229 | orchestrator | 2025-06-01 23:54:12 | INFO  | Task df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state STARTED 2025-06-01 23:54:12.056167 | orchestrator | 2025-06-01 23:54:12 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:54:12.057467 | orchestrator | 2025-06-01 23:54:12 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED 2025-06-01 23:54:12.057501 | orchestrator | 2025-06-01 23:54:12 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:54:15.105411 | orchestrator | 2025-06-01 23:54:15 | INFO  | Task df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state STARTED 2025-06-01 23:54:15.108538 | orchestrator | 2025-06-01 23:54:15 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:54:15.110733 | orchestrator | 2025-06-01 23:54:15 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED 2025-06-01 23:54:15.110789 | orchestrator | 2025-06-01 23:54:15 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:54:18.159049 | orchestrator | 2025-06-01 23:54:18 | INFO  | Task df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state STARTED 2025-06-01 23:54:18.160408 | orchestrator | 2025-06-01 23:54:18 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state STARTED 2025-06-01 23:54:18.161895 | orchestrator | 2025-06-01 23:54:18 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED 2025-06-01 23:54:18.162267 | orchestrator | 2025-06-01 23:54:18 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:54:21.214386 | orchestrator | 2025-06-01 23:54:21 | INFO  | Task df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state STARTED 2025-06-01 23:54:21.223754 | orchestrator | 2025-06-01 23:54:21.223823 | orchestrator | 2025-06-01 23:54:21.223837 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-06-01 23:54:21.223850 | orchestrator | 2025-06-01 23:54:21.223861 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-06-01 23:54:21.223969 | orchestrator | Sunday 01 June 2025 23:43:16 +0000 (0:00:00.672) 0:00:00.672 *********** 2025-06-01 23:54:21.223983 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 23:54:21.223995 | orchestrator | 2025-06-01 23:54:21.224006 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-06-01 23:54:21.224054 | orchestrator | Sunday 01 June 2025 23:43:17 +0000 (0:00:01.069) 0:00:01.741 *********** 2025-06-01 23:54:21.224067 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:54:21.224282 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:54:21.224294 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:54:21.224305 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:54:21.224318 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:54:21.224330 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:54:21.224342 | orchestrator | 2025-06-01 23:54:21.224356 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-06-01 23:54:21.224369 | orchestrator | Sunday 01 June 2025 23:43:18 +0000 (0:00:01.363) 0:00:03.104 *********** 2025-06-01 23:54:21.224381 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:54:21.224393 | 
2025-06-01 23:54:21.223754 | orchestrator |
2025-06-01 23:54:21.223823 | orchestrator |
2025-06-01 23:54:21.223837 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2025-06-01 23:54:21.223850 | orchestrator |
2025-06-01 23:54:21.223861 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-06-01 23:54:21.223969 | orchestrator | Sunday 01 June 2025 23:43:16 +0000 (0:00:00.672) 0:00:00.672 ***********
2025-06-01 23:54:21.223983 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 23:54:21.223995 | orchestrator |
2025-06-01 23:54:21.224006 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-06-01 23:54:21.224054 | orchestrator | Sunday 01 June 2025 23:43:17 +0000 (0:00:01.069) 0:00:01.741 ***********
2025-06-01 23:54:21.224067 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.224282 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.224294 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:54:21.224305 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:54:21.224318 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:54:21.224330 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.224342 | orchestrator |
2025-06-01 23:54:21.224356 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-06-01 23:54:21.224369 | orchestrator | Sunday 01 June 2025 23:43:18 +0000 (0:00:01.363) 0:00:03.104 ***********
2025-06-01 23:54:21.224381 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:54:21.224393 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:54:21.224405 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:54:21.224417 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.224429 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.224441 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.224454 | orchestrator |
2025-06-01 23:54:21.224466 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-06-01 23:54:21.224552 | orchestrator | Sunday 01 June 2025 23:43:19 +0000 (0:00:00.879) 0:00:03.983 ***********
2025-06-01 23:54:21.224685 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:54:21.224700 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:54:21.224713 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:54:21.224723 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.224734 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.224745 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.224755 | orchestrator |
2025-06-01 23:54:21.224766 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-06-01 23:54:21.224777 | orchestrator | Sunday 01 June 2025 23:43:21 +0000 (0:00:01.179) 0:00:05.163 ***********
2025-06-01 23:54:21.224788 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:54:21.224798 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:54:21.224826 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:54:21.224837 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.224847 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.224858 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.224868 | orchestrator |
2025-06-01 23:54:21.224879 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-06-01 23:54:21.224890 | orchestrator | Sunday 01 June 2025 23:43:22 +0000 (0:00:01.038) 0:00:06.201 ***********
2025-06-01 23:54:21.225186 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:54:21.225202 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:54:21.225214 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:54:21.225226 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.225236 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.225247 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.225258 | orchestrator |
2025-06-01 23:54:21.225269 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-06-01 23:54:21.225280 | orchestrator | Sunday 01 June 2025 23:43:22 +0000 (0:00:00.532) 0:00:06.733 ***********
2025-06-01 23:54:21.225290 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:54:21.225301 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:54:21.225312 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:54:21.225322 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.225333 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.225343 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.225354 | orchestrator |
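The role has just probed each node for a podman binary and set `container_binary` from the result. The decision it encodes boils down to "podman if installed, otherwise docker"; a simplified sketch of that choice (the real role also weighs the host type), with `shutil.which` standing in for the role's probe:

```python
import shutil

def detect_container_binary() -> str:
    """Prefer podman when it is installed, fall back to docker otherwise."""
    return "podman" if shutil.which("podman") else "docker"
```

On these testbed nodes the result is docker, which is why the mon probes further down shell out to `docker ps`.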
2025-06-01 23:54:21.225365 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-06-01 23:54:21.225376 | orchestrator | Sunday 01 June 2025 23:43:23 +0000 (0:00:01.091) 0:00:07.825 ***********
2025-06-01 23:54:21.225387 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.225399 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:54:21.225410 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:54:21.225420 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.225431 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.225442 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.225452 | orchestrator |
2025-06-01 23:54:21.225463 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-06-01 23:54:21.225474 | orchestrator | Sunday 01 June 2025 23:43:24 +0000 (0:00:00.694) 0:00:08.519 ***********
2025-06-01 23:54:21.225485 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:54:21.225496 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:54:21.225506 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:54:21.225548 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.225559 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.225570 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.225638 | orchestrator |
2025-06-01 23:54:21.225649 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-06-01 23:54:21.225660 | orchestrator | Sunday 01 June 2025 23:43:25 +0000 (0:00:01.095) 0:00:09.615 ***********
2025-06-01 23:54:21.225671 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-01 23:54:21.225750 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-01 23:54:21.225775 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-01 23:54:21.225786 | orchestrator |
2025-06-01 23:54:21.225797 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-06-01 23:54:21.225807 | orchestrator | Sunday 01 June 2025 23:43:26 +0000 (0:00:00.845) 0:00:10.460 ***********
2025-06-01 23:54:21.225895 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:54:21.225908 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:54:21.225938 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:54:21.225949 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.225960 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.225971 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.225981 | orchestrator |
2025-06-01 23:54:21.226010 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-06-01 23:54:21.226089 | orchestrator | Sunday 01 June 2025 23:43:27 +0000 (0:00:01.623) 0:00:12.084 ***********
2025-06-01 23:54:21.226100 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-01 23:54:21.226256 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-01 23:54:21.226267 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-01 23:54:21.226278 | orchestrator |
2025-06-01 23:54:21.226289 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-06-01 23:54:21.226300 | orchestrator | Sunday 01 June 2025 23:43:31 +0000 (0:00:03.103) 0:00:15.187 ***********
2025-06-01 23:54:21.226341 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-01 23:54:21.226353 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-01 23:54:21.226435 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-01 23:54:21.226446 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.226457 | orchestrator |
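"Find a running mon container" runs `docker ps -q --filter name=ceph-mon-<hostname>` against each monitor; the exact command is echoed in the skipped results a little further down. The same probe as a subprocess sketch, with `container_binary` coming from the fact set earlier:

```python
import subprocess

def find_running_mon(container_binary: str, hostname: str) -> str | None:
    """Return the ID of a running ceph-mon container for hostname, or None."""
    result = subprocess.run(
        [container_binary, "ps", "-q", "--filter", f"name=ceph-mon-{hostname}"],
        capture_output=True,
        text=True,
        check=False,
    )
    container_id = result.stdout.strip()
    return container_id or None  # empty stdout: no mon container running yet
```

On this run every probe returns rc 0 with empty stdout because the monitors have not been deployed yet, which is why the dependent `set_fact` tasks below are skipped.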
2025-06-01 23:54:21.226467 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-06-01 23:54:21.226478 | orchestrator | Sunday 01 June 2025 23:43:32 +0000 (0:00:01.423) 0:00:16.610 ***********
2025-06-01 23:54:21.226492 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-06-01 23:54:21.226506 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-06-01 23:54:21.226517 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-06-01 23:54:21.226528 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.226539 | orchestrator |
2025-06-01 23:54:21.226558 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-06-01 23:54:21.226569 | orchestrator | Sunday 01 June 2025 23:43:33 +0000 (0:00:01.034) 0:00:17.645 ***********
2025-06-01 23:54:21.226583 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-06-01 23:54:21.226598 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-06-01 23:54:21.226662 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-06-01 23:54:21.226708 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.226719 | orchestrator |
2025-06-01 23:54:21.226730 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-06-01 23:54:21.226741 | orchestrator | Sunday 01 June 2025 23:43:33 +0000 (0:00:00.298) 0:00:17.944 ***********
2025-06-01 23:54:21.226755 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-06-01 23:43:28.559787', 'end': '2025-06-01 23:43:28.835093', 'delta': '0:00:00.275306', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-06-01 23:54:21.226782 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-06-01 23:43:29.793125', 'end': '2025-06-01 23:43:30.052703', 'delta': '0:00:00.259578', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-06-01 23:54:21.226795 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-06-01 23:43:30.562728', 'end': '2025-06-01 23:43:30.833689', 'delta': '0:00:00.270961', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-06-01 23:54:21.226807 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.226818 | orchestrator |
2025-06-01 23:54:21.226828 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-06-01 23:54:21.226839 | orchestrator | Sunday 01 June 2025 23:43:33 +0000 (0:00:00.190) 0:00:18.134 ***********
2025-06-01 23:54:21.226850 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:54:21.226861 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:54:21.226878 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.226889 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:54:21.226900 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.226911 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.226980 | orchestrator |
2025-06-01 23:54:21.226991 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-06-01 23:54:21.227002 | orchestrator | Sunday 01 June 2025 23:43:36 +0000 (0:00:02.195) 0:00:20.330 ***********
2025-06-01 23:54:21.227021 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:54:21.227032 | orchestrator |
2025-06-01 23:54:21.227043 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-06-01 23:54:21.227054 | orchestrator | Sunday 01 June 2025 23:43:36 +0000 (0:00:00.686) 0:00:21.017 ***********
2025-06-01 23:54:21.227065 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.227076 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:54:21.227086 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:54:21.227096 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.227106 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.227115 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.227125 | orchestrator |
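"Get current fsid if cluster is already running" asks an existing cluster for its fsid so that a redeploy reuses it rather than minting a new one; only when that probe yields nothing does the "Generate cluster fsid" task further down create a fresh UUID. A sketch of the reuse-or-generate pattern — the role's exact probe command differs, but `ceph fsid` is the canonical query:

```python
import subprocess
import uuid

def get_or_generate_fsid() -> str:
    """Reuse the fsid of a running cluster when possible, else mint one."""
    probe = subprocess.run(
        ["ceph", "fsid"], capture_output=True, text=True, check=False
    )
    if probe.returncode == 0 and probe.stdout.strip():
        return probe.stdout.strip()  # cluster already initialized: reuse it
    return str(uuid.uuid4())  # fresh cluster: generate a new fsid
```

Here the probe on testbed-node-0 came back ok, so the generate-and-set fallbacks that follow are all skipped.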
2025-06-01 23:54:21.227134 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-06-01 23:54:21.227144 | orchestrator | Sunday 01 June 2025 23:43:38 +0000 (0:00:01.179) 0:00:22.196 ***********
2025-06-01 23:54:21.227154 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.227163 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:54:21.227173 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:54:21.227182 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.227192 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.227201 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.227211 | orchestrator |
2025-06-01 23:54:21.227220 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-06-01 23:54:21.227230 | orchestrator | Sunday 01 June 2025 23:43:40 +0000 (0:00:02.032) 0:00:24.229 ***********
2025-06-01 23:54:21.227239 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.227249 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:54:21.227258 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:54:21.227268 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.227277 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.227286 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.227296 | orchestrator |
2025-06-01 23:54:21.227305 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-06-01 23:54:21.227315 | orchestrator | Sunday 01 June 2025 23:43:41 +0000 (0:00:00.968) 0:00:25.198 ***********
2025-06-01 23:54:21.227324 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.227334 | orchestrator |
2025-06-01 23:54:21.227344 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-06-01 23:54:21.227353 | orchestrator | Sunday 01 June 2025 23:43:41 +0000 (0:00:00.164) 0:00:25.362 ***********
2025-06-01 23:54:21.227364 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.227380 | orchestrator |
2025-06-01 23:54:21.227396 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-06-01 23:54:21.227412 | orchestrator | Sunday 01 June 2025 23:43:41 +0000 (0:00:00.285) 0:00:25.648 ***********
2025-06-01 23:54:21.227429 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.227445 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:54:21.227460 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:54:21.227477 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.227488 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.227497 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.227507 | orchestrator |
2025-06-01 23:54:21.227516 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-06-01 23:54:21.227533 | orchestrator | Sunday 01 June 2025 23:43:42 +0000 (0:00:00.881) 0:00:26.530 ***********
2025-06-01 23:54:21.227543 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.227553 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:54:21.227562 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:54:21.227572 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.227581 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.227591 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.227600 | orchestrator |
2025-06-01 23:54:21.227609 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-06-01 23:54:21.227626 | orchestrator | Sunday 01 June 2025 23:43:43 +0000 (0:00:01.205) 0:00:27.735 ***********
2025-06-01 23:54:21.227636 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.227645 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:54:21.227655 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:54:21.227664 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.227673 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.227682 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.227692 | orchestrator |
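The "Resolve device link(s)" / "Set_fact build devices from resolved symlinks" pair normalizes whatever the operator listed in `devices` (often stable /dev/disk/by-id symlinks) into canonical kernel names before they are matched against the collected hardware facts. The tasks are skipped on this run, but the core operation is a symlink dereference; a sketch using `os.path.realpath`, with an illustrative by-id path taken from the device facts dumped further below:

```python
import os

def resolve_device_links(devices: list[str]) -> list[str]:
    """Dereference /dev/disk/by-* symlinks to their canonical /dev/* nodes."""
    return [os.path.realpath(device) for device in devices]

# Illustrative only -- per the facts below, this by-id link on testbed-node-3
# corresponds to its OSD data disk /dev/sdb:
# resolve_device_links(
#     ["/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_e23ad96a-b832-416d-911f-1711f12500c4"]
# )
```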
2025-06-01 23:54:21.227701 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-06-01 23:54:21.227711 | orchestrator | Sunday 01 June 2025 23:43:44 +0000 (0:00:00.875) 0:00:28.610 ***********
2025-06-01 23:54:21.227721 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.227730 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:54:21.227739 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:54:21.227748 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.227758 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.227767 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.227777 | orchestrator |
2025-06-01 23:54:21.227786 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-06-01 23:54:21.227796 | orchestrator | Sunday 01 June 2025 23:43:45 +0000 (0:00:01.208) 0:00:29.819 ***********
2025-06-01 23:54:21.227805 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.227815 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:54:21.227824 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:54:21.227833 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.227843 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.227852 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.227862 | orchestrator |
2025-06-01 23:54:21.227871 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-06-01 23:54:21.227881 | orchestrator | Sunday 01 June 2025 23:43:46 +0000 (0:00:00.867) 0:00:30.686 ***********
2025-06-01 23:54:21.227890 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.227900 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:54:21.227930 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:54:21.227941 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.227950 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.227960 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.227969 | orchestrator |
2025-06-01 23:54:21.227979 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-06-01 23:54:21.227989 | orchestrator | Sunday 01 June 2025 23:43:47 +0000 (0:00:00.962) 0:00:31.649 ***********
2025-06-01 23:54:21.227998 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.228008 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:54:21.228017 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:54:21.228027 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.228036 |
orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.228046 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.228055 | orchestrator | 2025-06-01 23:54:21.228065 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-06-01 23:54:21.228074 | orchestrator | Sunday 01 June 2025 23:43:48 +0000 (0:00:00.709) 0:00:32.358 *********** 2025-06-01 23:54:21.228085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 23:54:21.228095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 23:54:21.228112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 23:54:21.228122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 23:54:21.228155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 23:54:21.228165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 23:54:21.228175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}})  2025-06-01 23:54:21.228185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 23:54:21.228204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82c7df1e-32ed-4306-ad14-c7acdab76517', 'scsi-SQEMU_QEMU_HARDDISK_82c7df1e-32ed-4306-ad14-c7acdab76517'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82c7df1e-32ed-4306-ad14-c7acdab76517-part1', 'scsi-SQEMU_QEMU_HARDDISK_82c7df1e-32ed-4306-ad14-c7acdab76517-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82c7df1e-32ed-4306-ad14-c7acdab76517-part14', 'scsi-SQEMU_QEMU_HARDDISK_82c7df1e-32ed-4306-ad14-c7acdab76517-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82c7df1e-32ed-4306-ad14-c7acdab76517-part15', 'scsi-SQEMU_QEMU_HARDDISK_82c7df1e-32ed-4306-ad14-c7acdab76517-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82c7df1e-32ed-4306-ad14-c7acdab76517-part16', 'scsi-SQEMU_QEMU_HARDDISK_82c7df1e-32ed-4306-ad14-c7acdab76517-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 23:54:21.228232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-22-06-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 23:54:21.228245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 23:54:21.228255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 23:54:21.228265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 23:54:21.228280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 23:54:21.228290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 23:54:21.228300 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.228310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 23:54:21.228326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 23:54:21.228336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 23:54:21.228354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage 
controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_189a8ba1-bc59-4831-afbf-98fe97dbcace', 'scsi-SQEMU_QEMU_HARDDISK_189a8ba1-bc59-4831-afbf-98fe97dbcace'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_189a8ba1-bc59-4831-afbf-98fe97dbcace-part1', 'scsi-SQEMU_QEMU_HARDDISK_189a8ba1-bc59-4831-afbf-98fe97dbcace-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_189a8ba1-bc59-4831-afbf-98fe97dbcace-part14', 'scsi-SQEMU_QEMU_HARDDISK_189a8ba1-bc59-4831-afbf-98fe97dbcace-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_189a8ba1-bc59-4831-afbf-98fe97dbcace-part15', 'scsi-SQEMU_QEMU_HARDDISK_189a8ba1-bc59-4831-afbf-98fe97dbcace-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_189a8ba1-bc59-4831-afbf-98fe97dbcace-part16', 'scsi-SQEMU_QEMU_HARDDISK_189a8ba1-bc59-4831-afbf-98fe97dbcace-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 23:54:21.228377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-22-06-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 23:54:21.228388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 23:54:21.228404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 23:54:21.228436 | orchestrator | 
skipping: [testbed-node-1] 2025-06-01 23:54:21.228447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 23:54:21.228467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 23:54:21.228483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 23:54:21.228493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 23:54:21.228503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 23:54:21.228513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 23:54:21.228529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3aa2206-ef16-41e3-9f26-c7be1b94f31f', 'scsi-SQEMU_QEMU_HARDDISK_f3aa2206-ef16-41e3-9f26-c7be1b94f31f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3aa2206-ef16-41e3-9f26-c7be1b94f31f-part1', 'scsi-SQEMU_QEMU_HARDDISK_f3aa2206-ef16-41e3-9f26-c7be1b94f31f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3aa2206-ef16-41e3-9f26-c7be1b94f31f-part14', 'scsi-SQEMU_QEMU_HARDDISK_f3aa2206-ef16-41e3-9f26-c7be1b94f31f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3aa2206-ef16-41e3-9f26-c7be1b94f31f-part15', 'scsi-SQEMU_QEMU_HARDDISK_f3aa2206-ef16-41e3-9f26-c7be1b94f31f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f3aa2206-ef16-41e3-9f26-c7be1b94f31f-part16', 'scsi-SQEMU_QEMU_HARDDISK_f3aa2206-ef16-41e3-9f26-c7be1b94f31f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 23:54:21.228553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-22-06-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 2025-06-01 23:54:21 | INFO  | Task 5c1eb716-0a80-4be3-ac47-0c14186d9993 is in state SUCCESS 2025-06-01 23:54:21.228565 | orchestrator | 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 23:54:21.228576 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--008ba5ef--cc9a--56f9--b375--6638a5870e2c-osd--block--008ba5ef--cc9a--56f9--b375--6638a5870e2c', 'dm-uuid-LVM-lY000ij8spVdbwMPsuHwxRm6N8rXo1xEKM0the2kvnHN2HXreC8YTiSxCd2xa1F9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-01 23:54:21.228586 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--21b07b94--4d11--536c--9a45--349f1f6df87d-osd--block--21b07b94--4d11--536c--9a45--349f1f6df87d', 'dm-uuid-LVM-GhKV0mAmfVz3OWnt6h44eSN08J1eg2uHr8WuQrYIYGQqaeCQUZTMj4etyxA5NS1i'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-01 23:54:21.228601 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 23:54:21.228611 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 23:54:21.228630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 23:54:21.228640 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 23:54:21.228650 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 23:54:21.228660 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 23:54:21.228678 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 23:54:21.228688 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-01 23:54:21.228698 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.228714 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9eb8197d-3dc8-4459-9c52-34779715aaef', 'scsi-SQEMU_QEMU_HARDDISK_9eb8197d-3dc8-4459-9c52-34779715aaef'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9eb8197d-3dc8-4459-9c52-34779715aaef-part1', 'scsi-SQEMU_QEMU_HARDDISK_9eb8197d-3dc8-4459-9c52-34779715aaef-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9eb8197d-3dc8-4459-9c52-34779715aaef-part14', 'scsi-SQEMU_QEMU_HARDDISK_9eb8197d-3dc8-4459-9c52-34779715aaef-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9eb8197d-3dc8-4459-9c52-34779715aaef-part15', 'scsi-SQEMU_QEMU_HARDDISK_9eb8197d-3dc8-4459-9c52-34779715aaef-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9eb8197d-3dc8-4459-9c52-34779715aaef-part16', 'scsi-SQEMU_QEMU_HARDDISK_9eb8197d-3dc8-4459-9c52-34779715aaef-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 23:54:21.228733 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--008ba5ef--cc9a--56f9--b375--6638a5870e2c-osd--block--008ba5ef--cc9a--56f9--b375--6638a5870e2c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-p9a3OW-9Nqb-KczT-JCE0-BEka-77tW-mDkqM7', 'scsi-0QEMU_QEMU_HARDDISK_e23ad96a-b832-416d-911f-1711f12500c4', 'scsi-SQEMU_QEMU_HARDDISK_e23ad96a-b832-416d-911f-1711f12500c4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 23:54:21.228755 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--21b07b94--4d11--536c--9a45--349f1f6df87d-osd--block--21b07b94--4d11--536c--9a45--349f1f6df87d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7d1Hl7-0zv3-YwmZ-7i1K-4hHB-xpVU-3DfYgL', 'scsi-0QEMU_QEMU_HARDDISK_768ce349-132d-4c04-96b3-035bfe10ebf6', 'scsi-SQEMU_QEMU_HARDDISK_768ce349-132d-4c04-96b3-035bfe10ebf6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 23:54:21.228766 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f2cefa5c-3d1d-4277-b121-6d9adea683a7', 'scsi-SQEMU_QEMU_HARDDISK_f2cefa5c-3d1d-4277-b121-6d9adea683a7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 23:54:21.228778 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e43a5796--5555--5d7b--8188--8712d414b3d1-osd--block--e43a5796--5555--5d7b--8188--8712d414b3d1', 'dm-uuid-LVM-8rltPS1zinphry04VtbqOAXIZky2BSieqrdJexquUh4cweVg01NJJXJtYXbGecAM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-01 23:54:21.228793 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-22-06-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 23:54:21.228809 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3aa9cf12--e8a4--5f15--a0dc--00261f7d28af-osd--block--3aa9cf12--e8a4--5f15--a0dc--00261f7d28af', 
'dm-uuid-LVM-C3VzGVCzSeDNjw2tbyMc3DGsFYajVmUTz1RoWot2AV2e2E7uQ1eGrtoZwznrehJx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-06-01 23:54:21.228819 | orchestrator | skipping: [testbed-node-4] => (items loop0..loop7: empty virtual loop devices, 0.00 Bytes; identical facts for each item, elided)
2025-06-01 23:54:21.228886 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.228954 | orchestrator | skipping: [testbed-node-4] => (item sda: QEMU HARDDISK, 80.00 GB root disk; partitions sda1 'cloudimg-rootfs' 79.00 GB, sda14 4.00 MB, sda15 'UEFI' 106.00 MB, sda16 'BOOT' 913.00 MB)
2025-06-01 23:54:21.228966 | orchestrator | skipping: [testbed-node-4] => (items sdb, sdc: QEMU HARDDISK, 20.00 GB each; LVM PVs already held by ceph-<uuid>-osd-block volumes dm-0 and dm-1)
2025-06-01 23:54:21.228999 | orchestrator | skipping: [testbed-node-5] => (items dm-0, dm-1, loop0..loop7, sda, sdb, sdc, sdd, sr0: same device layout as testbed-node-4, with node-specific SCSI and LVM UUIDs; per-item facts elided)
2025-06-01 23:54:21.229009 | orchestrator | skipping: [testbed-node-4] => (item sdd: QEMU HARDDISK, 20.00 GB, unused; item sr0: QEMU DVD-ROM, 506.00 KB, label 'config-2')
2025-06-01 23:54:21.229047 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.229234 | orchestrator | skipping: [testbed-node-5]
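Every skipped loop item above is one entry from the node's gathered block-device facts. For orientation, a minimal sketch that reproduces one node's view of that structure outside the job (the hostname is taken from the log; the playbook itself is illustrative and not part of the testbed deployment):

    # Illustrative only: dump the same ansible_devices mapping that the
    # ceph-facts loops above iterate over, for one node from this log.
    - hosts: testbed-node-4
      gather_facts: true
      tasks:
        - name: Show gathered block-device facts
          ansible.builtin.debug:
            var: ansible_devices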
2025-06-01 23:54:21.229244 | orchestrator |
2025-06-01 23:54:21.229254 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2025-06-01 23:54:21.229264 | orchestrator | Sunday 01 June 2025 23:43:50 +0000 (0:00:02.013)       0:00:34.372 ***********
2025-06-01 23:54:21.229274 | orchestrator | skipping: [testbed-node-0] => (items loop0..loop7, sda, sr0; skip_reason 'Conditional result was False', false_condition 'inventory_hostname in groups.get(osd_group_name, [])'; control-plane node carrying only the 80.00 GB root disk sda and the 'config-2' DVD sr0 besides the loop devices; per-item facts elided)
2025-06-01 23:54:21.229417 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.229428 | orchestrator | skipping: [testbed-node-1] => (items loop0..loop7, sda, sr0; same false_condition as testbed-node-0; per-item facts elided)
2025-06-01 23:54:21.229586 | orchestrator | skipping: [testbed-node-2] => (items loop0..loop7, sda, sr0; same false_condition as testbed-node-0; per-item facts elided)
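The task name and the recorded false_condition strings outline what this step would do if it ran. A minimal sketch of the implied task shape follows; the two when-conditions are taken verbatim from the log, while the set_fact body and the extra per-device filters are assumptions about typical auto-discovery guards, not the verbatim ceph-ansible source:

    # Sketch reconstructed from the skip records above; not the actual role code.
    - name: Set_fact devices generate device list when osd_auto_discovery
      ansible.builtin.set_fact:
        devices: "{{ devices | default([]) + ['/dev/' + item.key] }}"
      loop: "{{ ansible_devices | dict2items }}"  # one skip record per device above
      when:
        - inventory_hostname in groups.get(osd_group_name, [])  # false on testbed-node-0..2
        - osd_auto_discovery | default(False) | bool            # false on testbed-node-3..5
        - item.value.removable == '0'       # assumed filter: would exclude sr0
        - item.value.holders | length == 0  # assumed filter: would exclude sdb/sdc (already Ceph PVs)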
2025-06-01 23:54:21.230005 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:54:21.230043 | orchestrator | skipping: [testbed-node-3] => (items dm-0, dm-1, loop0..loop7, sda, sdb, sdc, sdd, sr0; skip_reason 'Conditional result was False', false_condition 'osd_auto_discovery | default(False) | bool'; sdb and sdc are 20.00 GB LVM PVs already held by ceph-<uuid>-osd-block volumes dm-0 and dm-1, sdd is unused; per-item facts elided)
2025-06-01 23:54:21.230197 | orchestrator | skipping: [testbed-node-2]
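Both skip reasons together describe the deployment's intent: the control-plane nodes are simply not OSD hosts, and the OSD hosts leave osd_auto_discovery at its default of false, which points to an explicitly configured device list. A sketch of group_vars that would produce exactly these records; osd_auto_discovery appears verbatim in the log, while the group name and device paths are inferred from the sdb/sdc PVs seen above and are illustrative only:

    # Illustrative group_vars for the OSD hosts (testbed-node-3..5).
    osd_group_name: osds       # assumed group name; only its members pass the first guard
    osd_auto_discovery: false  # default, so the auto-discovery set_fact never fires
    devices:                   # explicit device list instead; sdb/sdc already carry OSD LVs
      - /dev/sdb
      - /dev/sdc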
2025-06-01 23:54:21.230335 | orchestrator | skipping: [testbed-node-4] => (items dm-0, dm-1, loop0..loop7, sda, sdb; same false_condition as testbed-node-3; per-item facts elided)
2025-06-01 23:54:21.230406 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.230567 | orchestrator | skipping: [testbed-node-5] => (items dm-0, dm-1, loop0..loop2; same false_condition as testbed-node-3; per-item facts elided)
2025-06-01 23:54:21.230805 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--3aa9cf12--e8a4--5f15--a0dc--00261f7d28af-osd--block--3aa9cf12--e8a4--5f15--a0dc--00261f7d28af'], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-z4dNgx-iCbZ-zdgb-84C2-wWgr-4Inq-N7foRu', 'scsi-0QEMU_QEMU_HARDDISK_39b25e00-2509-407e-b71e-c183a8ac9680', 'scsi-SQEMU_QEMU_HARDDISK_39b25e00-2509-407e-b71e-c183a8ac9680'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:54:21.230818 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:54:21.230826 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_389c9d93-9871-4a47-9a60-ac279d750f3d', 'scsi-SQEMU_QEMU_HARDDISK_389c9d93-9871-4a47-9a60-ac279d750f3d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:54:21.230840 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:54:21.230848 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-22-06-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:54:21.230857 | orchestrator | skipping: [testbed-node-5] 
=> (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:54:21.230865 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.230873 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:54:21.230890 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:54:21.230900 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee682ed7-cf61-4b4b-b7dc-0c09473318ce', 'scsi-SQEMU_QEMU_HARDDISK_ee682ed7-cf61-4b4b-b7dc-0c09473318ce'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee682ed7-cf61-4b4b-b7dc-0c09473318ce-part1', 'scsi-SQEMU_QEMU_HARDDISK_ee682ed7-cf61-4b4b-b7dc-0c09473318ce-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee682ed7-cf61-4b4b-b7dc-0c09473318ce-part14', 'scsi-SQEMU_QEMU_HARDDISK_ee682ed7-cf61-4b4b-b7dc-0c09473318ce-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee682ed7-cf61-4b4b-b7dc-0c09473318ce-part15', 'scsi-SQEMU_QEMU_HARDDISK_ee682ed7-cf61-4b4b-b7dc-0c09473318ce-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee682ed7-cf61-4b4b-b7dc-0c09473318ce-part16', 'scsi-SQEMU_QEMU_HARDDISK_ee682ed7-cf61-4b4b-b7dc-0c09473318ce-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:54:21.230935 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--94e6c78b--35f7--5cb8--865b--5befb7b6694e-osd--block--94e6c78b--35f7--5cb8--865b--5befb7b6694e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uqqcBj-NimS-QMCO-aRaJ-qXOj-GzzO-V2iKe2', 'scsi-0QEMU_QEMU_HARDDISK_b890f567-0ad2-40b6-bedf-e62e59fc0322', 'scsi-SQEMU_QEMU_HARDDISK_b890f567-0ad2-40b6-bedf-e62e59fc0322'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:54:21.230957 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--0de39833--f6ff--5bf1--9ca3--735e32822edb-osd--block--0de39833--f6ff--5bf1--9ca3--735e32822edb'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-mbm1bs-L8pD-XLHD-828b-UG3H-3g49-J97dJp', 'scsi-0QEMU_QEMU_HARDDISK_9eb75d32-600b-4da1-bdd4-064d087d06d5', 'scsi-SQEMU_QEMU_HARDDISK_9eb75d32-600b-4da1-bdd4-064d087d06d5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:54:21.230966 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a8e8789d-2f8d-4752-a1c5-15f6e96bd27f', 'scsi-SQEMU_QEMU_HARDDISK_a8e8789d-2f8d-4752-a1c5-15f6e96bd27f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:54:21.230980 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-22-06-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:54:21.230988 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.230996 | orchestrator | 2025-06-01 23:54:21.231004 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-06-01 23:54:21.231013 | orchestrator | Sunday 01 June 2025 23:43:52 +0000 (0:00:01.833) 0:00:36.205 *********** 2025-06-01 23:54:21.231021 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:54:21.231030 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:54:21.231037 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:54:21.231045 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:54:21.231053 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:54:21.231061 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:54:21.231068 | orchestrator | 2025-06-01 23:54:21.231076 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-06-01 23:54:21.231084 | orchestrator | Sunday 01 June 2025 23:43:53 +0000 (0:00:01.833) 0:00:38.039 *********** 2025-06-01 23:54:21.231092 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:54:21.231099 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:54:21.231107 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:54:21.231115 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:54:21.231122 | orchestrator | ok: 
2025-06-01 23:54:21.231138 | orchestrator |
2025-06-01 23:54:21.231146 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-06-01 23:54:21.231154 | orchestrator | Sunday 01 June 2025 23:43:54 +0000 (0:00:00.779) 0:00:38.818 ***********
2025-06-01 23:54:21.231161 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.231169 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:54:21.231177 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:54:21.231185 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.231193 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.231200 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.231208 | orchestrator |
2025-06-01 23:54:21.231216 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-06-01 23:54:21.231224 | orchestrator | Sunday 01 June 2025 23:43:56 +0000 (0:00:01.405) 0:00:40.224 ***********
2025-06-01 23:54:21.231232 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.231240 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:54:21.231247 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:54:21.231255 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.231263 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.231270 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.231283 | orchestrator |
2025-06-01 23:54:21.231291 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-06-01 23:54:21.231299 | orchestrator | Sunday 01 June 2025 23:43:57 +0000 (0:00:00.948) 0:00:41.173 ***********
2025-06-01 23:54:21.231307 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.231315 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:54:21.231323 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:54:21.231334 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.231343 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.231354 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.231362 | orchestrator |
2025-06-01 23:54:21.231370 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-06-01 23:54:21.231378 | orchestrator | Sunday 01 June 2025 23:43:58 +0000 (0:00:01.109) 0:00:42.282 ***********
2025-06-01 23:54:21.231386 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.231394 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:54:21.231401 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:54:21.231409 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.231417 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.231424 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.231432 | orchestrator |
2025-06-01 23:54:21.231440 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-06-01 23:54:21.231448 | orchestrator | Sunday 01 June 2025 23:43:59 +0000 (0:00:03.224) 0:00:43.416 ***********
2025-06-01 23:54:21.231456 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-01 23:54:21.231464 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-06-01 23:54:21.231472 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-06-01 23:54:21.231479 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-06-01 23:54:21.231487 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-06-01 23:54:21.231495 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-06-01 23:54:21.231503 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-06-01 23:54:21.231511 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-06-01 23:54:21.231518 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-06-01 23:54:21.231526 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-06-01 23:54:21.231534 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-06-01 23:54:21.231541 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-06-01 23:54:21.231549 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-06-01 23:54:21.231557 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-06-01 23:54:21.231565 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-06-01 23:54:21.231572 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-06-01 23:54:21.231580 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-06-01 23:54:21.231587 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-06-01 23:54:21.231595 | orchestrator |
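The _monitor_addresses fact built above is a list of {name, addr} pairs, one per monitor host, that later templates use to render the mon_host line in ceph.conf. Roughly (the real role selects the address that falls inside public_network; the default_ipv4 lookup below is a simplification, and the group name is assumed):

  - name: Set_fact _monitor_addresses - ipv4
    ansible.builtin.set_fact:
      _monitor_addresses: >-
        {{ _monitor_addresses | default([])
           + [{'name': item,
               'addr': hostvars[item]['ansible_facts']['default_ipv4']['address']}] }}
    loop: "{{ groups[mon_group_name] }}"   # the three control nodes here
    when: ip_version == 'ipv4'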
2025-06-01 23:54:21.231603 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-06-01 23:54:21.231611 | orchestrator | Sunday 01 June 2025 23:44:02 +0000 (0:00:03.224) 0:00:46.640 ***********
2025-06-01 23:54:21.231619 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-01 23:54:21.231627 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-01 23:54:21.231634 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-01 23:54:21.231642 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.231650 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-06-01 23:54:21.231658 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-06-01 23:54:21.231665 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-06-01 23:54:21.231673 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:54:21.231681 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-06-01 23:54:21.231694 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-06-01 23:54:21.231701 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-06-01 23:54:21.231709 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:54:21.231717 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-06-01 23:54:21.231725 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-06-01 23:54:21.231732 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-06-01 23:54:21.231740 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.231748 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-06-01 23:54:21.231756 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-06-01 23:54:21.231763 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-06-01 23:54:21.231771 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.231778 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-06-01 23:54:21.231786 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-06-01 23:54:21.231794 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-06-01 23:54:21.231802 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.231809 | orchestrator |
2025-06-01 23:54:21.231817 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-06-01 23:54:21.231825 | orchestrator | Sunday 01 June 2025 23:44:03 +0000 (0:00:00.956) 0:00:47.596 ***********
2025-06-01 23:54:21.231833 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.231841 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:54:21.231848 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:54:21.231857 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 23:54:21.231865 | orchestrator |
2025-06-01 23:54:21.231873 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-06-01 23:54:21.231882 | orchestrator | Sunday 01 June 2025 23:44:05 +0000 (0:00:01.786) 0:00:49.383 ***********
2025-06-01 23:54:21.231890 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.231898 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.231905 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.231928 | orchestrator |
2025-06-01 23:54:21.231936 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-06-01 23:54:21.231949 | orchestrator | Sunday 01 June 2025 23:44:06 +0000 (0:00:00.964) 0:00:50.348 ***********
2025-06-01 23:54:21.231957 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.231968 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.231976 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.231984 | orchestrator |
2025-06-01 23:54:21.231992 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-06-01 23:54:21.232000 | orchestrator | Sunday 01 June 2025 23:44:07 +0000 (0:00:01.150) 0:00:51.498 ***********
2025-06-01 23:54:21.232008 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.232015 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.232023 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.232030 | orchestrator |
2025-06-01 23:54:21.232038 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-06-01 23:54:21.232046 | orchestrator | Sunday 01 June 2025 23:44:07 +0000 (0:00:00.580) 0:00:52.078 ***********
2025-06-01 23:54:21.232054 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.232062 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.232070 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.232078 | orchestrator |
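set_radosgw_address.yml resolves the RGW bind address through a fallback chain: radosgw_address_block (a CIDR matched against the host's addresses), then an explicit radosgw_address, then radosgw_interface. Only the middle branch runs above, so the testbed sets radosgw_address explicitly on the RGW nodes. A sketch of that branch (the 'x.x.x.x' sentinel mirrors ceph-ansible's unset default; exact conditions assumed):

  - name: Set_fact _radosgw_address to radosgw_address
    ansible.builtin.set_fact:
      _radosgw_address: "{{ radosgw_address }}"
    when:
      - radosgw_address is defined
      - radosgw_address != 'x.x.x.x'   # the skipped branches leave the sentinel untouched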
2025-06-01 23:54:21.232085 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-06-01 23:54:21.232093 | orchestrator | Sunday 01 June 2025 23:44:08 +0000 (0:00:00.909) 0:00:52.987 ***********
2025-06-01 23:54:21.232101 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-01 23:54:21.232114 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-01 23:54:21.232122 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-01 23:54:21.232130 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.232138 | orchestrator |
2025-06-01 23:54:21.232146 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-06-01 23:54:21.232153 | orchestrator | Sunday 01 June 2025 23:44:09 +0000 (0:00:00.678) 0:00:53.666 ***********
2025-06-01 23:54:21.232161 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-01 23:54:21.232169 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-01 23:54:21.232177 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-01 23:54:21.232184 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.232192 | orchestrator |
2025-06-01 23:54:21.232200 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-06-01 23:54:21.232208 | orchestrator | Sunday 01 June 2025 23:44:10 +0000 (0:00:00.748) 0:00:54.414 ***********
2025-06-01 23:54:21.232216 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-01 23:54:21.232224 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-01 23:54:21.232231 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-01 23:54:21.232239 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.232247 | orchestrator |
2025-06-01 23:54:21.232255 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-06-01 23:54:21.232263 | orchestrator | Sunday 01 June 2025 23:44:11 +0000 (0:00:01.236) 0:00:55.650 ***********
2025-06-01 23:54:21.232270 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.232278 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.232286 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.232294 | orchestrator |
2025-06-01 23:54:21.232301 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-06-01 23:54:21.232309 | orchestrator | Sunday 01 June 2025 23:44:12 +0000 (0:00:00.882) 0:00:56.533 ***********
2025-06-01 23:54:21.232317 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-06-01 23:54:21.232325 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-06-01 23:54:21.232333 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-06-01 23:54:21.232341 | orchestrator |
2025-06-01 23:54:21.232349 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-06-01 23:54:21.232357 | orchestrator | Sunday 01 June 2025 23:44:13 +0000 (0:00:00.985) 0:00:57.519 ***********
2025-06-01 23:54:21.232364 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-01 23:54:21.232372 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-01 23:54:21.232380 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-01 23:54:21.232388 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-06-01 23:54:21.232396 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-06-01 23:54:21.232404 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-06-01 23:54:21.232411 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-06-01 23:54:21.232419 | orchestrator |
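ceph_run_cmd is set once per inventory host from a single delegating loop run on testbed-node-0, which is why every result renders as "testbed-node-0 -> <host>". In a containerized deployment it wraps the CLI in a container run; on bare metal it is plain "ceph". A hedged sketch (image, volume and loop details vary by release and are assumptions here):

  - name: Set_fact ceph_run_cmd
    ansible.builtin.set_fact:
      ceph_run_cmd: >-
        {{ container_binary ~ ' run --rm --net=host -v /etc/ceph:/etc/ceph:z --entrypoint=ceph '
           ~ ceph_docker_registry ~ '/' ~ ceph_docker_image ~ ':' ~ ceph_docker_image_tag
           if containerized_deployment | bool else 'ceph' }}
    delegate_to: "{{ item }}"
    delegate_facts: true     # store the fact on the delegated host, not on node-0
    run_once: true
    loop: "{{ ansible_play_hosts_all + ['testbed-manager'] }}"   # loop list simplified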
2025-06-01 23:54:21.232427 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-06-01 23:54:21.232435 | orchestrator | Sunday 01 June 2025 23:44:14 +0000 (0:00:01.152) 0:00:58.672 ***********
2025-06-01 23:54:21.232442 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-01 23:54:21.232450 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-01 23:54:21.232458 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-01 23:54:21.232466 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-06-01 23:54:21.232479 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-06-01 23:54:21.232487 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-06-01 23:54:21.232495 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-06-01 23:54:21.232503 | orchestrator |
2025-06-01 23:54:21.232511 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-01 23:54:21.232518 | orchestrator | Sunday 01 June 2025 23:44:17 +0000 (0:00:02.990) 0:01:01.662 ***********
2025-06-01 23:54:21.232537 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 23:54:21.232546 | orchestrator |
2025-06-01 23:54:21.232554 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-01 23:54:21.232562 | orchestrator | Sunday 01 June 2025 23:44:18 +0000 (0:00:01.463) 0:01:03.125 ***********
2025-06-01 23:54:21.232569 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 23:54:21.232577 | orchestrator |
2025-06-01 23:54:21.232585 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-01 23:54:21.232593 | orchestrator | Sunday 01 June 2025 23:44:20 +0000 (0:00:01.735) 0:01:04.861 ***********
2025-06-01 23:54:21.232601 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:54:21.232608 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.232616 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:54:21.232624 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.232632 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.232640 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:54:21.232647 | orchestrator |
2025-06-01 23:54:21.232655 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-01 23:54:21.232663 | orchestrator | Sunday 01 June 2025 23:44:21 +0000 (0:00:00.878) 0:01:05.740 ***********
2025-06-01 23:54:21.232671 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.232679 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:54:21.232686 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:54:21.232694 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.232702 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.232710 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.232717 | orchestrator |
2025-06-01 23:54:21.232725 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-06-01 23:54:21.232733 | orchestrator | Sunday 01 June 2025 23:44:23 +0000 (0:00:01.906) 0:01:07.647 ***********
2025-06-01 23:54:21.232741 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.232748 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:54:21.232756 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:54:21.232764 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.232772 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.232779 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.232787 | orchestrator |
2025-06-01 23:54:21.232795 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-06-01 23:54:21.232803 | orchestrator | Sunday 01 June 2025 23:44:24 +0000 (0:00:01.468) 0:01:09.115 ***********
2025-06-01 23:54:21.232811 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.232818 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:54:21.232826 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:54:21.232834 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.232842 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.232849 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.232857 | orchestrator |
2025-06-01 23:54:21.232865 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-06-01 23:54:21.232873 | orchestrator | Sunday 01 June 2025 23:44:26 +0000 (0:00:01.499) 0:01:10.614 ***********
2025-06-01 23:54:21.232887 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.232895 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:54:21.232903 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:54:21.232910 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.232967 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:54:21.232975 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.232983 | orchestrator |
2025-06-01 23:54:21.232991 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-06-01 23:54:21.232998 | orchestrator | Sunday 01 June 2025 23:44:27 +0000 (0:00:00.801) 0:01:11.415 ***********
2025-06-01 23:54:21.233006 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.233014 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:54:21.233022 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:54:21.233029 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.233037 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.233045 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.233052 | orchestrator |
2025-06-01 23:54:21.233060 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-01 23:54:21.233068 | orchestrator | Sunday 01 June 2025 23:44:28 +0000 (0:00:00.787) 0:01:12.203 ***********
2025-06-01 23:54:21.233076 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.233084 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:54:21.233091 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:54:21.233099 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.233107 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.233114 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.233122 | orchestrator |
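Each "Check for a … container" task only runs on the hosts that should carry that daemon (hence the mon checks skip the OSD nodes and vice versa) and records whether a matching container already exists. A sketch of the shape of one such check (task and register names assumed, not copied from the role):

  - name: Check for a mon container
    ansible.builtin.command: >-
      {{ container_binary }} ps -q --filter name=ceph-mon-{{ ansible_facts['hostname'] }}
    register: ceph_mon_container_stat
    changed_when: false     # a pure probe, never reported as a change
    failed_when: false      # a missing container is a valid answer, not an error
    check_mode: false
    when: inventory_hostname in groups[mon_group_name]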
2025-06-01 23:54:21.233130 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-06-01 23:54:21.233138 | orchestrator | Sunday 01 June 2025 23:44:28 +0000 (0:00:00.845) 0:01:13.049 ***********
2025-06-01 23:54:21.233146 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:54:21.233153 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:54:21.233161 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:54:21.233168 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.233174 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.233181 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.233187 | orchestrator |
2025-06-01 23:54:21.233194 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-06-01 23:54:21.233201 | orchestrator | Sunday 01 June 2025 23:44:29 +0000 (0:00:00.975) 0:01:14.024 ***********
2025-06-01 23:54:21.233207 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:54:21.233214 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:54:21.233220 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:54:21.233227 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.233233 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.233239 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.233246 | orchestrator |
2025-06-01 23:54:21.233253 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-06-01 23:54:21.233259 | orchestrator | Sunday 01 June 2025 23:44:31 +0000 (0:00:01.454) 0:01:15.478 ***********
2025-06-01 23:54:21.233266 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.233276 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:54:21.233283 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:54:21.233294 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.233301 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.233308 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.233314 | orchestrator |
2025-06-01 23:54:21.233321 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-06-01 23:54:21.233328 | orchestrator | Sunday 01 June 2025 23:44:31 +0000 (0:00:00.574) 0:01:16.053 ***********
2025-06-01 23:54:21.233335 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:54:21.233341 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:54:21.233348 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:54:21.233354 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.233361 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.233373 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.233380 | orchestrator |
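The handler_*_status facts condense those container probes into per-host booleans that later handlers consult before attempting a rolling restart of a daemon. Roughly (variable names assumed to match the probe sketched earlier):

  - name: Set_fact handler_mon_status
    ansible.builtin.set_fact:
      handler_mon_status: "{{ (ceph_mon_container_stat.stdout_lines | default([])) | length > 0 }}"
    when: inventory_hostname in groups[mon_group_name]   # hence the skips on the OSD-only nodes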
2025-06-01 23:54:21.233386 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-06-01 23:54:21.233393 | orchestrator | Sunday 01 June 2025 23:44:32 +0000 (0:00:00.912) 0:01:16.966 ***********
2025-06-01 23:54:21.233400 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.233406 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:54:21.233412 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:54:21.233419 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.233426 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.233432 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.233439 | orchestrator |
2025-06-01 23:54:21.233445 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-06-01 23:54:21.233452 | orchestrator | Sunday 01 June 2025 23:44:33 +0000 (0:00:00.699) 0:01:17.666 ***********
2025-06-01 23:54:21.233459 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.233465 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:54:21.233472 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:54:21.233478 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.233485 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.233491 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.233498 | orchestrator |
2025-06-01 23:54:21.233505 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-06-01 23:54:21.233511 | orchestrator | Sunday 01 June 2025 23:44:34 +0000 (0:00:00.875) 0:01:18.541 ***********
2025-06-01 23:54:21.233518 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.233525 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:54:21.233531 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:54:21.233538 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.233545 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.233551 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.233558 | orchestrator |
2025-06-01 23:54:21.233564 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-06-01 23:54:21.233571 | orchestrator | Sunday 01 June 2025 23:44:35 +0000 (0:00:00.634) 0:01:19.176 ***********
2025-06-01 23:54:21.233578 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.233584 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:54:21.233591 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:54:21.233597 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.233604 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.233610 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.233617 | orchestrator |
2025-06-01 23:54:21.233624 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-06-01 23:54:21.233630 | orchestrator | Sunday 01 June 2025 23:44:35 +0000 (0:00:00.875) 0:01:20.052 ***********
2025-06-01 23:54:21.233637 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.233643 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:54:21.233650 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:54:21.233656 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.233663 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.233669 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.233676 | orchestrator |
2025-06-01 23:54:21.233683 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-06-01 23:54:21.233689 | orchestrator | Sunday 01 June 2025 23:44:36 +0000 (0:00:00.614) 0:01:20.667 ***********
2025-06-01 23:54:21.233696 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:54:21.233702 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:54:21.233709 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:54:21.233716 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.233722 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.233729 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.233735 | orchestrator |
2025-06-01 23:54:21.233742 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-06-01 23:54:21.233758 | orchestrator | Sunday 01 June 2025 23:44:37 +0000 (0:00:00.818) 0:01:21.485 ***********
2025-06-01 23:54:21.233769 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:54:21.233780 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:54:21.233792 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:54:21.233803 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.233814 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.233825 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.233832 | orchestrator |
2025-06-01 23:54:21.233839 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-06-01 23:54:21.233846 | orchestrator | Sunday 01 June 2025 23:44:38 +0000 (0:00:00.666) 0:01:22.151 ***********
2025-06-01 23:54:21.233852 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:54:21.233859 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:54:21.233865 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:54:21.233871 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.233878 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.233884 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.233891 | orchestrator |
2025-06-01 23:54:21.233897 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2025-06-01 23:54:21.233904 | orchestrator | Sunday 01 June 2025 23:44:39 +0000 (0:00:01.364) 0:01:23.516 ***********
2025-06-01 23:54:21.233911 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:54:21.233933 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:54:21.233939 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:54:21.233946 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:54:21.233952 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:54:21.233959 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:54:21.233965 | orchestrator |
2025-06-01 23:54:21.233972 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2025-06-01 23:54:21.233979 | orchestrator | Sunday 01 June 2025 23:44:40 +0000 (0:00:01.620) 0:01:25.136 ***********
2025-06-01 23:54:21.233989 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:54:21.233996 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:54:21.234006 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:54:21.234013 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:54:21.234068 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:54:21.234075 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:54:21.234081 | orchestrator |
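ceph.target is a plain systemd grouping unit: the per-daemon ceph-* container services are tied to it, so the whole stack on a node can be started or stopped as one. A sketch of the two tasks above (unit text abridged and assumed, not copied from the role):

  - name: Generate systemd ceph target file
    ansible.builtin.copy:
      dest: /etc/systemd/system/ceph.target
      mode: "0644"
      content: |
        [Unit]
        Description=ceph target allowing to start/stop all ceph services at once

        [Install]
        WantedBy=multi-user.target

  - name: Enable ceph.target
    ansible.builtin.systemd:
      name: ceph.target
      enabled: true
      daemon_reload: true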
2025-06-01 23:54:21.234088 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2025-06-01 23:54:21.234095 | orchestrator | Sunday 01 June 2025 23:44:42 +0000 (0:00:01.839) 0:01:26.975 ***********
2025-06-01 23:54:21.234102 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 23:54:21.234109 | orchestrator |
2025-06-01 23:54:21.234115 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2025-06-01 23:54:21.234122 | orchestrator | Sunday 01 June 2025 23:44:44 +0000 (0:00:01.209) 0:01:28.184 ***********
2025-06-01 23:54:21.234129 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.234135 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:54:21.234142 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:54:21.234148 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.234155 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.234162 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.234170 | orchestrator |
2025-06-01 23:54:21.234177 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2025-06-01 23:54:21.234184 | orchestrator | Sunday 01 June 2025 23:44:44 +0000 (0:00:00.781) 0:01:28.966 ***********
2025-06-01 23:54:21.234191 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.234197 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:54:21.234204 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:54:21.234211 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.234217 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.234224 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.234236 | orchestrator |
2025-06-01 23:54:21.234243 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2025-06-01 23:54:21.234250 | orchestrator | Sunday 01 June 2025 23:44:45 +0000 (0:00:00.635) 0:01:29.601 ***********
2025-06-01 23:54:21.234257 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-06-01 23:54:21.234263 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-06-01 23:54:21.234270 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-06-01 23:54:21.234277 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-06-01 23:54:21.234284 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-06-01 23:54:21.234290 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-06-01 23:54:21.234297 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-06-01 23:54:21.234303 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-06-01 23:54:21.234310 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-06-01 23:54:21.234316 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-06-01 23:54:21.234323 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-06-01 23:54:21.234330 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-06-01 23:54:21.234336 | orchestrator |
2025-06-01 23:54:21.234343 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2025-06-01 23:54:21.234349 | orchestrator | Sunday 01 June 2025 23:44:47 +0000 (0:00:01.678) 0:01:31.279 ***********
2025-06-01 23:54:21.234356 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:54:21.234363 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:54:21.234369 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:54:21.234376 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:54:21.234382 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:54:21.234389 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:54:21.234395 | orchestrator |
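The udev-rule cleanup above is a straightforward file-absent loop over the two rule paths echoed in the results; it reports ok rather than changed because fresh Ubuntu 24.04 nodes never had the ceph-disk-era rules installed. A minimal sketch:

  - name: Remove ceph udev rules
    ansible.builtin.file:
      path: "{{ item }}"
      state: absent
    loop:
      - /usr/lib/udev/rules.d/95-ceph-osd.rules
      - /usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules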
2025-06-01 23:54:21.234402 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2025-06-01 23:54:21.234409 | orchestrator | Sunday 01 June 2025 23:44:48 +0000 (0:00:00.887) 0:01:32.167 ***********
2025-06-01 23:54:21.234415 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.234422 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:54:21.234428 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:54:21.234435 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.234441 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.234448 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.234454 | orchestrator |
2025-06-01 23:54:21.234461 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2025-06-01 23:54:21.234468 | orchestrator | Sunday 01 June 2025 23:44:48 +0000 (0:00:00.863) 0:01:33.031 ***********
2025-06-01 23:54:21.234474 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.234481 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:54:21.234487 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:54:21.234494 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.234500 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.234507 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.234513 | orchestrator |
2025-06-01 23:54:21.234520 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2025-06-01 23:54:21.234527 | orchestrator | Sunday 01 June 2025 23:44:49 +0000 (0:00:00.617) 0:01:33.648 ***********
2025-06-01 23:54:21.234533 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.234540 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:54:21.234547 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:54:21.234558 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.234564 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.234586 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.234593 | orchestrator |
2025-06-01 23:54:21.234604 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2025-06-01 23:54:21.234611 | orchestrator | Sunday 01 June 2025 23:44:50 +0000 (0:00:00.868) 0:01:34.516 ***********
2025-06-01 23:54:21.234618 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 23:54:21.234625 | orchestrator |
2025-06-01 23:54:21.234631 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2025-06-01 23:54:21.234638 | orchestrator | Sunday 01 June 2025 23:44:51 +0000 (0:00:01.162) 0:01:35.679 ***********
2025-06-01 23:54:21.234645 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.234651 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:54:21.234658 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:54:21.234664 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.234671 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.234677 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:54:21.234684 | orchestrator |
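The 56-second "Pulling Ceph container image" step dominates this role: it is essentially a retried pull of the configured image against the registry, reported as ok rather than changed. A sketch of the pattern (retry counts and variable names assumed):

  - name: Pulling Ceph container image
    ansible.builtin.command: >-
      {{ container_binary }} pull {{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}
    changed_when: false
    register: docker_image
    until: docker_image.rc == 0   # retry transient registry failures
    retries: 3
    delay: 10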
[testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-01 23:54:21.234723 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.234730 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-01 23:54:21.234737 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-01 23:54:21.234743 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-01 23:54:21.234750 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.234756 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-01 23:54:21.234763 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-01 23:54:21.234770 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-01 23:54:21.234776 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.234783 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-01 23:54:21.234790 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-01 23:54:21.234796 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-01 23:54:21.234803 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.234809 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-01 23:54:21.234816 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-01 23:54:21.234822 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-01 23:54:21.234829 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.234836 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-01 23:54:21.234842 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-01 23:54:21.234849 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-01 23:54:21.234855 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.234862 | orchestrator | 2025-06-01 23:54:21.234869 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-06-01 23:54:21.234881 | orchestrator | Sunday 01 June 2025 23:45:49 +0000 (0:00:01.137) 0:02:33.395 *********** 2025-06-01 23:54:21.234887 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.234894 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.234900 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.234907 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.234950 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.234958 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.234964 | orchestrator | 2025-06-01 23:54:21.234971 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-06-01 23:54:21.234978 | orchestrator | Sunday 01 June 2025 23:45:49 +0000 (0:00:00.721) 0:02:34.117 *********** 2025-06-01 23:54:21.234984 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.234991 | orchestrator | 2025-06-01 23:54:21.234997 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-06-01 23:54:21.235004 | orchestrator | Sunday 01 June 2025 23:45:50 +0000 (0:00:00.175) 
0:02:34.293 *********** 2025-06-01 23:54:21.235011 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.235017 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.235024 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.235030 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.235037 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.235043 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.235076 | orchestrator | 2025-06-01 23:54:21.235083 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-06-01 23:54:21.235089 | orchestrator | Sunday 01 June 2025 23:45:51 +0000 (0:00:01.249) 0:02:35.543 *********** 2025-06-01 23:54:21.235096 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.235103 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.235109 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.235115 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.235122 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.235128 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.235135 | orchestrator | 2025-06-01 23:54:21.235142 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-06-01 23:54:21.235153 | orchestrator | Sunday 01 June 2025 23:45:52 +0000 (0:00:00.838) 0:02:36.382 *********** 2025-06-01 23:54:21.235165 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.235171 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.235178 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.235185 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.235191 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.235198 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.235204 | orchestrator | 2025-06-01 23:54:21.235211 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-06-01 23:54:21.235218 | orchestrator | Sunday 01 June 2025 23:45:53 +0000 (0:00:00.994) 0:02:37.377 *********** 2025-06-01 23:54:21.235224 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:54:21.235231 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:54:21.235238 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:54:21.235244 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:54:21.235251 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:54:21.235257 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:54:21.235264 | orchestrator | 2025-06-01 23:54:21.235270 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-06-01 23:54:21.235277 | orchestrator | Sunday 01 June 2025 23:45:55 +0000 (0:00:02.281) 0:02:39.658 *********** 2025-06-01 23:54:21.235284 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:54:21.235290 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:54:21.235296 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:54:21.235302 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:54:21.235308 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:54:21.235314 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:54:21.235320 | orchestrator | 2025-06-01 23:54:21.235326 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-06-01 23:54:21.235337 | orchestrator | Sunday 01 June 2025 23:45:56 +0000 (0:00:00.755) 0:02:40.414 *********** 2025-06-01 23:54:21.235344 | 
orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 23:54:21.235351 | orchestrator | 2025-06-01 23:54:21.235357 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-06-01 23:54:21.235364 | orchestrator | Sunday 01 June 2025 23:45:57 +0000 (0:00:01.045) 0:02:41.459 *********** 2025-06-01 23:54:21.235370 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.235376 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.235382 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.235388 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.235394 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.235401 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.235407 | orchestrator | 2025-06-01 23:54:21.235413 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-06-01 23:54:21.235419 | orchestrator | Sunday 01 June 2025 23:45:57 +0000 (0:00:00.635) 0:02:42.095 *********** 2025-06-01 23:54:21.235425 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.235431 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.235437 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.235444 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.235450 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.235456 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.235462 | orchestrator | 2025-06-01 23:54:21.235468 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-06-01 23:54:21.235474 | orchestrator | Sunday 01 June 2025 23:45:58 +0000 (0:00:00.625) 0:02:42.720 *********** 2025-06-01 23:54:21.235480 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.235486 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.235492 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.235498 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.235505 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.235511 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.235517 | orchestrator | 2025-06-01 23:54:21.235523 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-06-01 23:54:21.235529 | orchestrator | Sunday 01 June 2025 23:45:59 +0000 (0:00:00.636) 0:02:43.357 *********** 2025-06-01 23:54:21.235535 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.235541 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.235547 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.235553 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.235559 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.235565 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.235571 | orchestrator | 2025-06-01 23:54:21.235578 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-06-01 23:54:21.235584 | orchestrator | Sunday 01 June 2025 23:46:00 +0000 (0:00:00.805) 0:02:44.162 *********** 2025-06-01 23:54:21.235590 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.235596 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.235602 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.235608 | 
orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.235614 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.235620 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.235626 | orchestrator | 2025-06-01 23:54:21.235632 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-06-01 23:54:21.235638 | orchestrator | Sunday 01 June 2025 23:46:00 +0000 (0:00:00.644) 0:02:44.806 *********** 2025-06-01 23:54:21.235645 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.235651 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.235657 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.235663 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.235673 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.235679 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.235685 | orchestrator | 2025-06-01 23:54:21.235691 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-06-01 23:54:21.235697 | orchestrator | Sunday 01 June 2025 23:46:01 +0000 (0:00:00.780) 0:02:45.587 *********** 2025-06-01 23:54:21.235703 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.235710 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.235716 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.235722 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.235728 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.235734 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.235740 | orchestrator | 2025-06-01 23:54:21.235751 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-06-01 23:54:21.235761 | orchestrator | Sunday 01 June 2025 23:46:02 +0000 (0:00:00.681) 0:02:46.269 *********** 2025-06-01 23:54:21.235767 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.235773 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.235779 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.235785 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.235791 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.235798 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.235804 | orchestrator | 2025-06-01 23:54:21.235810 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-06-01 23:54:21.235816 | orchestrator | Sunday 01 June 2025 23:46:02 +0000 (0:00:00.759) 0:02:47.028 *********** 2025-06-01 23:54:21.235822 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:54:21.235828 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:54:21.235834 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:54:21.235841 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:54:21.235847 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:54:21.235853 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:54:21.235859 | orchestrator | 2025-06-01 23:54:21.235865 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-06-01 23:54:21.235871 | orchestrator | Sunday 01 June 2025 23:46:04 +0000 (0:00:01.185) 0:02:48.213 *********** 2025-06-01 23:54:21.235877 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 23:54:21.235884 | orchestrator | 2025-06-01 
23:54:21.235890 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-06-01 23:54:21.235896 | orchestrator | Sunday 01 June 2025 23:46:05 +0000 (0:00:01.312) 0:02:49.526 *********** 2025-06-01 23:54:21.235903 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-06-01 23:54:21.235909 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-06-01 23:54:21.235928 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-06-01 23:54:21.235934 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-06-01 23:54:21.235941 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-06-01 23:54:21.235947 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-06-01 23:54:21.235953 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-06-01 23:54:21.235959 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-06-01 23:54:21.235965 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-06-01 23:54:21.235971 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-06-01 23:54:21.235977 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-06-01 23:54:21.235983 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-06-01 23:54:21.235989 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-06-01 23:54:21.235996 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-06-01 23:54:21.236002 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-06-01 23:54:21.236013 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-06-01 23:54:21.236019 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-06-01 23:54:21.236025 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-06-01 23:54:21.236031 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-06-01 23:54:21.236037 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-06-01 23:54:21.236043 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-06-01 23:54:21.236049 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-06-01 23:54:21.236056 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-06-01 23:54:21.236062 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-06-01 23:54:21.236068 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-06-01 23:54:21.236074 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-06-01 23:54:21.236080 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-06-01 23:54:21.236086 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-06-01 23:54:21.236092 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-06-01 23:54:21.236098 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-06-01 23:54:21.236104 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-06-01 23:54:21.236110 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-06-01 23:54:21.236117 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-06-01 23:54:21.236123 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-06-01 23:54:21.236129 | orchestrator | changed: 
[testbed-node-1] => (item=/var/lib/ceph/crash) 2025-06-01 23:54:21.236135 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-06-01 23:54:21.236141 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-06-01 23:54:21.236147 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-06-01 23:54:21.236153 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-06-01 23:54:21.236159 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-06-01 23:54:21.236165 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-06-01 23:54:21.236172 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-06-01 23:54:21.236178 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-06-01 23:54:21.236184 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-06-01 23:54:21.236190 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-01 23:54:21.236207 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-06-01 23:54:21.236213 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-06-01 23:54:21.236220 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-01 23:54:21.236226 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-06-01 23:54:21.236232 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-01 23:54:21.236238 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-06-01 23:54:21.236244 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-01 23:54:21.236250 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-01 23:54:21.236256 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-01 23:54:21.236262 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-01 23:54:21.236269 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-01 23:54:21.236275 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-01 23:54:21.236290 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-01 23:54:21.236296 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-01 23:54:21.236302 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-01 23:54:21.236308 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-01 23:54:21.236314 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-01 23:54:21.236321 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-01 23:54:21.236327 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-01 23:54:21.236333 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-01 23:54:21.236339 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-01 23:54:21.236345 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-01 23:54:21.236351 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-01 23:54:21.236357 | 
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-01 23:54:21.236363 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-01 23:54:21.236369 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-01 23:54:21.236376 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-01 23:54:21.236382 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-01 23:54:21.236388 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-01 23:54:21.236394 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-01 23:54:21.236400 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-01 23:54:21.236406 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-01 23:54:21.236412 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-01 23:54:21.236419 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-01 23:54:21.236425 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-01 23:54:21.236431 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-01 23:54:21.236437 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-06-01 23:54:21.236443 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-01 23:54:21.236449 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-01 23:54:21.236455 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-06-01 23:54:21.236462 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-06-01 23:54:21.236468 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-01 23:54:21.236474 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-06-01 23:54:21.236480 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-06-01 23:54:21.236486 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-06-01 23:54:21.236493 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-06-01 23:54:21.236499 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-06-01 23:54:21.236505 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-06-01 23:54:21.236511 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-06-01 23:54:21.236517 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-06-01 23:54:21.236523 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-06-01 23:54:21.236530 | orchestrator | 2025-06-01 23:54:21.236536 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-06-01 23:54:21.236546 | orchestrator | Sunday 01 June 2025 23:46:11 +0000 (0:00:06.368) 0:02:55.895 *********** 2025-06-01 23:54:21.236552 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.236559 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.236565 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.236571 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 
23:54:21.236578 | orchestrator | 2025-06-01 23:54:21.236591 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-06-01 23:54:21.236598 | orchestrator | Sunday 01 June 2025 23:46:12 +0000 (0:00:01.174) 0:02:57.069 *********** 2025-06-01 23:54:21.236604 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-01 23:54:21.236611 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-01 23:54:21.236617 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-01 23:54:21.236623 | orchestrator | 2025-06-01 23:54:21.236630 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-06-01 23:54:21.236636 | orchestrator | Sunday 01 June 2025 23:46:13 +0000 (0:00:00.771) 0:02:57.840 *********** 2025-06-01 23:54:21.236642 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-01 23:54:21.236648 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-01 23:54:21.236654 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-01 23:54:21.236661 | orchestrator | 2025-06-01 23:54:21.236667 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-06-01 23:54:21.236673 | orchestrator | Sunday 01 June 2025 23:46:15 +0000 (0:00:01.735) 0:02:59.576 *********** 2025-06-01 23:54:21.236679 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.236685 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.236691 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.236698 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:54:21.236704 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:54:21.236710 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:54:21.236716 | orchestrator | 2025-06-01 23:54:21.236723 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-06-01 23:54:21.236729 | orchestrator | Sunday 01 June 2025 23:46:16 +0000 (0:00:00.802) 0:03:00.378 *********** 2025-06-01 23:54:21.236735 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.236741 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.236747 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.236753 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:54:21.236759 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:54:21.236765 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:54:21.236772 | orchestrator | 2025-06-01 23:54:21.236779 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-06-01 23:54:21.236789 | orchestrator | Sunday 01 June 2025 23:46:17 +0000 (0:00:01.122) 0:03:01.501 *********** 2025-06-01 23:54:21.236800 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.236810 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.236820 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.236830 | orchestrator | skipping: 
[testbed-node-3] 2025-06-01 23:54:21.236840 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.236851 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.236857 | orchestrator | 2025-06-01 23:54:21.236864 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-06-01 23:54:21.236870 | orchestrator | Sunday 01 June 2025 23:46:18 +0000 (0:00:00.895) 0:03:02.397 *********** 2025-06-01 23:54:21.236880 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.236887 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.236893 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.236899 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.236905 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.236911 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.236929 | orchestrator | 2025-06-01 23:54:21.236935 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-06-01 23:54:21.236941 | orchestrator | Sunday 01 June 2025 23:46:19 +0000 (0:00:00.956) 0:03:03.354 *********** 2025-06-01 23:54:21.236947 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.236953 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.236960 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.236966 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.236972 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.236978 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.236984 | orchestrator | 2025-06-01 23:54:21.236990 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-06-01 23:54:21.236996 | orchestrator | Sunday 01 June 2025 23:46:19 +0000 (0:00:00.744) 0:03:04.098 *********** 2025-06-01 23:54:21.237002 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.237008 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.237014 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.237020 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.237026 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.237033 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.237039 | orchestrator | 2025-06-01 23:54:21.237045 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-06-01 23:54:21.237051 | orchestrator | Sunday 01 June 2025 23:46:21 +0000 (0:00:01.109) 0:03:05.207 *********** 2025-06-01 23:54:21.237057 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.237063 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.237070 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.237076 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.237082 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.237088 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.237094 | orchestrator | 2025-06-01 23:54:21.237100 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-06-01 23:54:21.237111 | orchestrator | Sunday 01 June 2025 23:46:21 +0000 (0:00:00.587) 0:03:05.795 *********** 2025-06-01 23:54:21.237117 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.237127 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.237133 | orchestrator 
| skipping: [testbed-node-2] 2025-06-01 23:54:21.237139 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.237145 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.237152 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.237158 | orchestrator | 2025-06-01 23:54:21.237164 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-06-01 23:54:21.237170 | orchestrator | Sunday 01 June 2025 23:46:22 +0000 (0:00:00.603) 0:03:06.399 *********** 2025-06-01 23:54:21.237176 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.237182 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.237188 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.237194 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:54:21.237200 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:54:21.237207 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:54:21.237213 | orchestrator | 2025-06-01 23:54:21.237219 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-06-01 23:54:21.237225 | orchestrator | Sunday 01 June 2025 23:46:26 +0000 (0:00:03.923) 0:03:10.322 *********** 2025-06-01 23:54:21.237231 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.237241 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.237248 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.237254 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:54:21.237260 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:54:21.237266 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:54:21.237272 | orchestrator | 2025-06-01 23:54:21.237278 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-06-01 23:54:21.237285 | orchestrator | Sunday 01 June 2025 23:46:27 +0000 (0:00:00.943) 0:03:11.266 *********** 2025-06-01 23:54:21.237291 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.237297 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.237303 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.237309 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:54:21.237315 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:54:21.237321 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:54:21.237327 | orchestrator | 2025-06-01 23:54:21.237334 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-06-01 23:54:21.237340 | orchestrator | Sunday 01 June 2025 23:46:27 +0000 (0:00:00.662) 0:03:11.928 *********** 2025-06-01 23:54:21.237346 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.237352 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.237358 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.237364 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.237370 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.237376 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.237383 | orchestrator | 2025-06-01 23:54:21.237389 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2025-06-01 23:54:21.237395 | orchestrator | Sunday 01 June 2025 23:46:28 +0000 (0:00:00.862) 0:03:12.791 *********** 2025-06-01 23:54:21.237401 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.237407 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.237414 | orchestrator | skipping: [testbed-node-2] 
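The num_osds bookkeeping in the ceph-config tasks above follows a simple pattern: ask ceph-volume which OSDs already exist on the host, count them, and derive a per-OSD memory target from host RAM. The tasks are skipped on testbed-node-0..2 and only run on testbed-node-3..5, which the rest of the log shows to be the OSD/RGW nodes. A minimal uncontainerized sketch of that pattern (the real role wraps ceph-volume in the Ceph container, and the 0.7 safety factor and variable names here are assumptions, not the ceph-ansible source):

# Sketch only: count existing OSDs and size osd_memory_target from them.
- name: Run 'ceph-volume lvm list' to see how many osds have already been created
  ansible.builtin.command: ceph-volume lvm list --format=json
  register: lvm_list
  changed_when: false

- name: Set_fact num_osds (add existing osds)
  ansible.builtin.set_fact:
    # The JSON output is a dict keyed by OSD id, so its length is the OSD count.
    num_osds: "{{ (lvm_list.stdout | from_json) | length }}"

- name: Set_fact _osd_memory_target
  ansible.builtin.set_fact:
    # Assumed heuristic: host RAM in bytes, scaled by a safety factor,
    # divided evenly across the OSDs on this host.
    _osd_memory_target: "{{ ((ansible_facts['memtotal_mb'] * 1048576 * 0.7) / (num_osds | int)) | int }}"

The "Render rgw configs" task whose output continues below is gated the same way, which is why it skips the three mon nodes before reporting ok for each rgw0 instance on the OSD/RGW nodes.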
2025-06-01 23:54:21.237420 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-01 23:54:21.237426 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-01 23:54:21.237432 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-01 23:54:21.237438 | orchestrator | 2025-06-01 23:54:21.237445 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-06-01 23:54:21.237451 | orchestrator | Sunday 01 June 2025 23:46:29 +0000 (0:00:00.668) 0:03:13.459 *********** 2025-06-01 23:54:21.237457 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.237463 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.237469 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.237477 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-06-01 23:54:21.237485 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-06-01 23:54:21.237493 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-06-01 23:54:21.237500 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-06-01 23:54:21.237509 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.237515 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.237529 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-06-01 23:54:21.237536 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2025-06-01 23:54:21.237543 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.237549 | orchestrator | 2025-06-01 23:54:21.237555 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-06-01 23:54:21.237561 | orchestrator | Sunday 01 June 2025 23:46:30 
+0000 (0:00:01.004) 0:03:14.464 *********** 2025-06-01 23:54:21.237567 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.237574 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.237580 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.237586 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.237592 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.237598 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.237604 | orchestrator | 2025-06-01 23:54:21.237610 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-06-01 23:54:21.237617 | orchestrator | Sunday 01 June 2025 23:46:30 +0000 (0:00:00.677) 0:03:15.142 *********** 2025-06-01 23:54:21.237623 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.237629 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.237635 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.237641 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.237647 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.237653 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.237659 | orchestrator | 2025-06-01 23:54:21.237666 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-06-01 23:54:21.237672 | orchestrator | Sunday 01 June 2025 23:46:31 +0000 (0:00:00.906) 0:03:16.048 *********** 2025-06-01 23:54:21.237678 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.237684 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.237690 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.237696 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.237702 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.237708 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.237715 | orchestrator | 2025-06-01 23:54:21.237721 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-06-01 23:54:21.237727 | orchestrator | Sunday 01 June 2025 23:46:32 +0000 (0:00:00.743) 0:03:16.792 *********** 2025-06-01 23:54:21.237733 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.237739 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.237745 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.237751 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.237758 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.237764 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.237770 | orchestrator | 2025-06-01 23:54:21.237776 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-06-01 23:54:21.237782 | orchestrator | Sunday 01 June 2025 23:46:33 +0000 (0:00:01.005) 0:03:17.798 *********** 2025-06-01 23:54:21.237792 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.237798 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.237804 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.237810 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.237816 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.237822 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.237829 | orchestrator | 2025-06-01 23:54:21.237835 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-06-01 
23:54:21.237841 | orchestrator | Sunday 01 June 2025 23:46:34 +0000 (0:00:00.799) 0:03:18.598 *********** 2025-06-01 23:54:21.237847 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.237853 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.237859 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.237865 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:54:21.237872 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:54:21.237878 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:54:21.237884 | orchestrator | 2025-06-01 23:54:21.237890 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-06-01 23:54:21.237896 | orchestrator | Sunday 01 June 2025 23:46:35 +0000 (0:00:01.009) 0:03:19.608 *********** 2025-06-01 23:54:21.237902 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-06-01 23:54:21.237909 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-06-01 23:54:21.237944 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-06-01 23:54:21.237951 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.237957 | orchestrator | 2025-06-01 23:54:21.237963 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-06-01 23:54:21.237969 | orchestrator | Sunday 01 June 2025 23:46:35 +0000 (0:00:00.357) 0:03:19.965 *********** 2025-06-01 23:54:21.237975 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-06-01 23:54:21.237982 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-06-01 23:54:21.237988 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-06-01 23:54:21.237994 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.238000 | orchestrator | 2025-06-01 23:54:21.238006 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-06-01 23:54:21.238013 | orchestrator | Sunday 01 June 2025 23:46:36 +0000 (0:00:00.349) 0:03:20.315 *********** 2025-06-01 23:54:21.238047 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-06-01 23:54:21.238054 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-06-01 23:54:21.238069 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-06-01 23:54:21.238076 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.238083 | orchestrator | 2025-06-01 23:54:21.238089 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-06-01 23:54:21.238096 | orchestrator | Sunday 01 June 2025 23:46:36 +0000 (0:00:00.374) 0:03:20.689 *********** 2025-06-01 23:54:21.238102 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.238109 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.238115 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.238121 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:54:21.238127 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:54:21.238133 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:54:21.238139 | orchestrator | 2025-06-01 23:54:21.238146 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-06-01 23:54:21.238152 | orchestrator | Sunday 01 June 2025 23:46:37 +0000 (0:00:00.568) 0:03:21.258 *********** 2025-06-01 23:54:21.238159 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-06-01 23:54:21.238165 | 
orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.238171 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-06-01 23:54:21.238177 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.238183 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-06-01 23:54:21.238197 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.238203 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-01 23:54:21.238209 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-06-01 23:54:21.238215 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-06-01 23:54:21.238221 | orchestrator | 2025-06-01 23:54:21.238228 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-06-01 23:54:21.238234 | orchestrator | Sunday 01 June 2025 23:46:38 +0000 (0:00:01.851) 0:03:23.109 *********** 2025-06-01 23:54:21.238240 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:54:21.238246 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:54:21.238252 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:54:21.238258 | orchestrator | changed: [testbed-node-3] 2025-06-01 23:54:21.238264 | orchestrator | changed: [testbed-node-4] 2025-06-01 23:54:21.238271 | orchestrator | changed: [testbed-node-5] 2025-06-01 23:54:21.238277 | orchestrator | 2025-06-01 23:54:21.238283 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-01 23:54:21.238289 | orchestrator | Sunday 01 June 2025 23:46:41 +0000 (0:00:02.658) 0:03:25.768 *********** 2025-06-01 23:54:21.238296 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:54:21.238301 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:54:21.238306 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:54:21.238312 | orchestrator | changed: [testbed-node-3] 2025-06-01 23:54:21.238317 | orchestrator | changed: [testbed-node-4] 2025-06-01 23:54:21.238322 | orchestrator | changed: [testbed-node-5] 2025-06-01 23:54:21.238328 | orchestrator | 2025-06-01 23:54:21.238333 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-06-01 23:54:21.238339 | orchestrator | Sunday 01 June 2025 23:46:42 +0000 (0:00:01.039) 0:03:26.807 *********** 2025-06-01 23:54:21.238344 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.238349 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.238355 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.238360 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:54:21.238366 | orchestrator | 2025-06-01 23:54:21.238371 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-06-01 23:54:21.238377 | orchestrator | Sunday 01 June 2025 23:46:43 +0000 (0:00:01.166) 0:03:27.974 *********** 2025-06-01 23:54:21.238382 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:54:21.238387 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:54:21.238393 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:54:21.238398 | orchestrator | 2025-06-01 23:54:21.238404 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-06-01 23:54:21.238409 | orchestrator | Sunday 01 June 2025 23:46:44 +0000 (0:00:00.326) 0:03:28.301 *********** 2025-06-01 23:54:21.238414 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:54:21.238420 | orchestrator | changed: 
[testbed-node-1] 2025-06-01 23:54:21.238425 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:54:21.238431 | orchestrator | 2025-06-01 23:54:21.238436 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-06-01 23:54:21.238442 | orchestrator | Sunday 01 June 2025 23:46:46 +0000 (0:00:01.909) 0:03:30.210 *********** 2025-06-01 23:54:21.238447 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-01 23:54:21.238452 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-01 23:54:21.238458 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-01 23:54:21.238463 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.238469 | orchestrator | 2025-06-01 23:54:21.238474 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-06-01 23:54:21.238480 | orchestrator | Sunday 01 June 2025 23:46:46 +0000 (0:00:00.610) 0:03:30.821 *********** 2025-06-01 23:54:21.238485 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:54:21.238490 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:54:21.238496 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:54:21.238506 | orchestrator | 2025-06-01 23:54:21.238511 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-06-01 23:54:21.238517 | orchestrator | Sunday 01 June 2025 23:46:47 +0000 (0:00:00.383) 0:03:31.205 *********** 2025-06-01 23:54:21.238522 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.238528 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.238533 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.238538 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 23:54:21.238544 | orchestrator | 2025-06-01 23:54:21.238549 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-06-01 23:54:21.238555 | orchestrator | Sunday 01 June 2025 23:46:48 +0000 (0:00:01.208) 0:03:32.414 *********** 2025-06-01 23:54:21.238560 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-01 23:54:21.238566 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-01 23:54:21.238583 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-01 23:54:21.238592 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.238598 | orchestrator | 2025-06-01 23:54:21.238604 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-06-01 23:54:21.238609 | orchestrator | Sunday 01 June 2025 23:46:48 +0000 (0:00:00.375) 0:03:32.789 *********** 2025-06-01 23:54:21.238614 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.238620 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.238625 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.238631 | orchestrator | 2025-06-01 23:54:21.238636 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-06-01 23:54:21.238641 | orchestrator | Sunday 01 June 2025 23:46:48 +0000 (0:00:00.345) 0:03:33.134 *********** 2025-06-01 23:54:21.238647 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.238652 | orchestrator | 2025-06-01 23:54:21.238658 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-06-01 
23:54:21.238663 | orchestrator | Sunday 01 June 2025 23:46:49 +0000 (0:00:00.220) 0:03:33.355 *********** 2025-06-01 23:54:21.238668 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.238674 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.238679 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.238685 | orchestrator | 2025-06-01 23:54:21.238690 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-06-01 23:54:21.238696 | orchestrator | Sunday 01 June 2025 23:46:49 +0000 (0:00:00.299) 0:03:33.655 *********** 2025-06-01 23:54:21.238701 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.238707 | orchestrator | 2025-06-01 23:54:21.238712 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-06-01 23:54:21.238717 | orchestrator | Sunday 01 June 2025 23:46:49 +0000 (0:00:00.213) 0:03:33.869 *********** 2025-06-01 23:54:21.238723 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.238728 | orchestrator | 2025-06-01 23:54:21.238734 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-06-01 23:54:21.238739 | orchestrator | Sunday 01 June 2025 23:46:49 +0000 (0:00:00.241) 0:03:34.110 *********** 2025-06-01 23:54:21.238744 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.238750 | orchestrator | 2025-06-01 23:54:21.238755 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-06-01 23:54:21.238761 | orchestrator | Sunday 01 June 2025 23:46:50 +0000 (0:00:00.426) 0:03:34.536 *********** 2025-06-01 23:54:21.238766 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.238771 | orchestrator | 2025-06-01 23:54:21.238777 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-06-01 23:54:21.238782 | orchestrator | Sunday 01 June 2025 23:46:50 +0000 (0:00:00.236) 0:03:34.773 *********** 2025-06-01 23:54:21.238788 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.238793 | orchestrator | 2025-06-01 23:54:21.238799 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-06-01 23:54:21.238808 | orchestrator | Sunday 01 June 2025 23:46:50 +0000 (0:00:00.218) 0:03:34.991 *********** 2025-06-01 23:54:21.238814 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-01 23:54:21.238819 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-01 23:54:21.238825 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-01 23:54:21.238830 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.238836 | orchestrator | 2025-06-01 23:54:21.238841 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-06-01 23:54:21.238846 | orchestrator | Sunday 01 June 2025 23:46:51 +0000 (0:00:00.398) 0:03:35.390 *********** 2025-06-01 23:54:21.238852 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.238857 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.238863 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.238868 | orchestrator | 2025-06-01 23:54:21.238874 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-06-01 23:54:21.238879 | orchestrator | Sunday 01 June 2025 23:46:51 +0000 (0:00:00.384) 0:03:35.774 *********** 2025-06-01 
23:54:21.238884 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.238890 | orchestrator | 2025-06-01 23:54:21.238895 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-06-01 23:54:21.238900 | orchestrator | Sunday 01 June 2025 23:46:51 +0000 (0:00:00.228) 0:03:36.002 *********** 2025-06-01 23:54:21.238906 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.238921 | orchestrator | 2025-06-01 23:54:21.238926 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-06-01 23:54:21.238932 | orchestrator | Sunday 01 June 2025 23:46:52 +0000 (0:00:00.221) 0:03:36.224 *********** 2025-06-01 23:54:21.238937 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.238943 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.238948 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.238954 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 23:54:21.238959 | orchestrator | 2025-06-01 23:54:21.238964 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-06-01 23:54:21.238970 | orchestrator | Sunday 01 June 2025 23:46:53 +0000 (0:00:01.153) 0:03:37.377 *********** 2025-06-01 23:54:21.238975 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:54:21.238981 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:54:21.238986 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:54:21.238991 | orchestrator | 2025-06-01 23:54:21.238997 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-06-01 23:54:21.239002 | orchestrator | Sunday 01 June 2025 23:46:53 +0000 (0:00:00.313) 0:03:37.691 *********** 2025-06-01 23:54:21.239007 | orchestrator | changed: [testbed-node-3] 2025-06-01 23:54:21.239013 | orchestrator | changed: [testbed-node-4] 2025-06-01 23:54:21.239018 | orchestrator | changed: [testbed-node-5] 2025-06-01 23:54:21.239023 | orchestrator | 2025-06-01 23:54:21.239029 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-06-01 23:54:21.239034 | orchestrator | Sunday 01 June 2025 23:46:54 +0000 (0:00:01.199) 0:03:38.890 *********** 2025-06-01 23:54:21.239039 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-01 23:54:21.239049 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-01 23:54:21.239060 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-01 23:54:21.239066 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.239071 | orchestrator | 2025-06-01 23:54:21.239077 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-06-01 23:54:21.239082 | orchestrator | Sunday 01 June 2025 23:46:55 +0000 (0:00:01.114) 0:03:40.004 *********** 2025-06-01 23:54:21.239087 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:54:21.239093 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:54:21.239098 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:54:21.239108 | orchestrator | 2025-06-01 23:54:21.239113 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-06-01 23:54:21.239118 | orchestrator | Sunday 01 June 2025 23:46:56 +0000 (0:00:00.350) 0:03:40.355 *********** 2025-06-01 23:54:21.239124 | orchestrator | skipping: [testbed-node-0] 2025-06-01 

RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
Sunday 01 June 2025 23:46:56 +0000 (0:00:00.350) 0:03:40.355 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
Sunday 01 June 2025 23:46:57 +0000 (0:00:01.163) 0:03:41.518 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
Sunday 01 June 2025 23:46:57 +0000 (0:00:00.377) 0:03:41.896 ***********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
Sunday 01 June 2025 23:46:59 +0000 (0:00:01.335) 0:03:43.232 ***********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
Sunday 01 June 2025 23:46:59 +0000 (0:00:00.857) 0:03:44.089 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
Sunday 01 June 2025 23:47:00 +0000 (0:00:00.880) 0:03:44.437 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
Sunday 01 June 2025 23:47:01 +0000 (0:00:00.880) 0:03:45.318 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
Sunday 01 June 2025 23:47:02 +0000 (0:00:01.036) 0:03:46.355 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
Sunday 01 June 2025 23:47:02 +0000 (0:00:00.307) 0:03:46.662 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
Sunday 01 June 2025 23:47:03 +0000 (0:00:01.224) 0:03:47.887 ***********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
Sunday 01 June 2025 23:47:04 +0000 (0:00:00.832) 0:03:48.720 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-mon] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Sunday 01 June 2025 23:47:05 +0000 (0:00:00.824) 0:03:49.545 ***********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Include check_running_containers.yml] *********************
Sunday 01 June 2025 23:47:05 +0000 (0:00:00.747) 0:03:50.072 ***********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Check for a mon container] ********************************
Sunday 01 June 2025 23:47:06 +0000 (0:00:00.688) 0:03:50.820 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for an osd container] *******************************
Sunday 01 June 2025 23:47:07 +0000 (0:00:00.317) 0:03:51.508 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mds container] ********************************
Sunday 01 June 2025 23:47:07 +0000 (0:00:00.305) 0:03:51.826 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a rgw container] ********************************
Sunday 01 June 2025 23:47:07 +0000 (0:00:00.590) 0:03:52.131 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mgr container] ********************************
Sunday 01 June 2025 23:47:08 +0000 (0:00:00.794) 0:03:52.722 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Sunday 01 June 2025 23:47:09 +0000 (0:00:00.499) 0:03:53.517 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a nfs container] ********************************
Sunday 01 June 2025 23:47:09 +0000 (0:00:00.310) 0:03:54.017 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
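
The "Check for a ... container" tasks only establish whether the daemon's container already exists on each host. Conceptually that is a name-filtered container listing; the exact filter string ceph-handler uses is an assumption here:

    # non-empty output means a mon container is already present on this host
    podman ps -q --filter "name=ceph-mon-$(hostname -s)"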

TASK [ceph-handler : Check for a ceph-crash container] *************************
Sunday 01 June 2025 23:47:10 +0000 (0:00:00.310) 0:03:54.327 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Sunday 01 June 2025 23:47:11 +0000 (0:00:00.937) 0:03:55.265 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Sunday 01 June 2025 23:47:11 +0000 (0:00:00.785) 0:03:56.051 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Sunday 01 June 2025 23:47:12 +0000 (0:00:00.279) 0:03:56.330 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Sunday 01 June 2025 23:47:12 +0000 (0:00:00.307) 0:03:56.638 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Sunday 01 June 2025 23:47:12 +0000 (0:00:00.450) 0:03:57.089 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Sunday 01 June 2025 23:47:13 +0000 (0:00:00.376) 0:03:57.465 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Sunday 01 June 2025 23:47:13 +0000 (0:00:00.469) 0:03:57.934 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Sunday 01 June 2025 23:47:14 +0000 (0:00:00.346) 0:03:58.281 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Sunday 01 June 2025 23:47:14 +0000 (0:00:00.558) 0:03:58.840 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Sunday 01 June 2025 23:47:15 +0000 (0:00:00.389) 0:03:59.229 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Sunday 01 June 2025 23:47:15 +0000 (0:00:00.431) 0:03:59.660 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
Sunday 01 June 2025 23:47:16 +0000 (0:00:00.880) 0:04:00.541 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Include deploy_monitors.yml] **********************************
Sunday 01 June 2025 23:47:16 +0000 (0:00:00.379) 0:04:00.921 ***********
included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
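
container_exec_cmd, set just above, is the prefix later tasks prepend so that ceph commands run inside the mon container rather than on the host. With podman it typically has this shape (the container name is illustrative):

    # run an arbitrary ceph command inside the local monitor container
    podman exec ceph-mon-testbed-node-0 ceph --cluster ceph -s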

TASK [ceph-mon : Check if monitor initial keyring already exists] **************
Sunday 01 June 2025 23:47:17 +0000 (0:00:00.626) 0:04:01.547 ***********
skipping: [testbed-node-0]

TASK [ceph-mon : Generate monitor initial keyring] *****************************
Sunday 01 June 2025 23:47:17 +0000 (0:00:00.147) 0:04:01.695 ***********
changed: [testbed-node-0 -> localhost]

TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
Sunday 01 June 2025 23:47:19 +0000 (0:00:01.753) 0:04:03.448 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Get initial keyring when it already exists] *******************
Sunday 01 June 2025 23:47:19 +0000 (0:00:00.356) 0:04:03.805 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Create monitor initial keyring] *******************************
Sunday 01 June 2025 23:47:20 +0000 (0:00:00.433) 0:04:04.239 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
Sunday 01 June 2025 23:47:21 +0000 (0:00:01.302) 0:04:05.541 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Create monitor directory] *************************************
Sunday 01 June 2025 23:47:22 +0000 (0:00:01.225) 0:04:06.766 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
Sunday 01 June 2025 23:47:23 +0000 (0:00:00.709) 0:04:07.476 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Create admin keyring] *****************************************
Sunday 01 June 2025 23:47:24 +0000 (0:00:00.767) 0:04:08.243 ***********
changed: [testbed-node-0]
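
The keyring tasks above follow the standard Ceph bootstrap recipe: generate the mon. secret, generate client.admin with full capabilities, and (two tasks below) import the admin key into the mon keyring. In ceph-authtool terms, roughly (paths per the upstream Ceph docs, not necessarily the role's exact invocations):

    # monitor secret
    ceph-authtool --create-keyring /etc/ceph/ceph.mon.keyring \
        --gen-key -n mon. --cap mon 'allow *'
    # cluster admin key
    ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring \
        --gen-key -n client.admin \
        --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
    # merge the admin key into the mon keyring so mkfs seeds both
    ceph-authtool /etc/ceph/ceph.mon.keyring \
        --import-keyring /etc/ceph/ceph.client.admin.keyring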

TASK [ceph-mon : Slurp admin keyring] ******************************************
Sunday 01 June 2025 23:47:25 +0000 (0:00:01.308) 0:04:09.552 ***********
ok: [testbed-node-0]

TASK [ceph-mon : Copy admin keyring over to mons] ******************************
Sunday 01 June 2025 23:47:26 +0000 (0:00:00.785) 0:04:10.337 ***********
changed: [testbed-node-0] => (item=None)
ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
ok: [testbed-node-1] => (item=None)
ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
changed: [testbed-node-0 -> {{ item }}]
ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
ok: [testbed-node-1 -> {{ item }}]
ok: [testbed-node-2] => (item=None)
ok: [testbed-node-2 -> {{ item }}]

TASK [ceph-mon : Import admin keyring into mon keyring] ************************
Sunday 01 June 2025 23:47:29 +0000 (0:00:03.713) 0:04:14.051 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Set_fact ceph-mon container command] **************************
Sunday 01 June 2025 23:47:31 +0000 (0:00:01.495) 0:04:15.546 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Set_fact monmaptool container command] ************************
Sunday 01 June 2025 23:47:31 +0000 (0:00:00.345) 0:04:15.892 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
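
The monmap generation and mkfs that follow reduce to one monmaptool call and one ceph-mon call per node. With the three monitors and addresses visible in this run it would look roughly like this, $FSID standing in for the generated cluster fsid:

    # build the initial monitor map
    monmaptool --create --fsid "$FSID" \
        --add testbed-node-0 192.168.16.10 \
        --add testbed-node-1 192.168.16.11 \
        --add testbed-node-2 192.168.16.12 \
        /etc/ceph/monmap
    # initialize this node's monitor store from monmap and keyring
    ceph-mon --cluster ceph --mkfs -i testbed-node-0 \
        --monmap /etc/ceph/monmap --keyring /etc/ceph/ceph.mon.keyring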

TASK [ceph-mon : Generate initial monmap] **************************************
Sunday 01 June 2025 23:47:32 +0000 (0:00:00.302) 0:04:16.195 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
Sunday 01 June 2025 23:47:33 +0000 (0:00:01.893) 0:04:18.088 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
Sunday 01 June 2025 23:47:35 +0000 (0:00:01.442) 0:04:19.531 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Include start_monitor.yml] ************************************
Sunday 01 June 2025 23:47:35 +0000 (0:00:00.273) 0:04:19.804 ***********
included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Ensure systemd service override directory exists] *************
Sunday 01 June 2025 23:47:36 +0000 (0:00:00.504) 0:04:20.309 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
Sunday 01 June 2025 23:47:36 +0000 (0:00:00.484) 0:04:20.793 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Include_tasks systemd.yml] ************************************
Sunday 01 June 2025 23:47:36 +0000 (0:00:00.294) 0:04:21.088 ***********
included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Generate systemd unit file for mon container] *****************
Sunday 01 June 2025 23:47:37 +0000 (0:00:00.501) 0:04:21.589 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
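
For containerized monitors the generated unit file wraps a container invocation, and the enable/start steps that follow are plain systemd. As a sketch of what the unit's start command and the activation amount to (image name, mounts, and unit names are illustrative, not the template's literal content):

    # roughly what the generated ExecStart runs
    podman run --rm --net=host --name "ceph-mon-$(hostname -s)" \
        -v /var/lib/ceph:/var/lib/ceph -v /etc/ceph:/etc/ceph \
        quay.io/ceph/ceph:latest ceph-mon -f -i "$(hostname -s)"
    # activation performed by the following tasks
    systemctl enable ceph-mon.target
    systemctl enable --now "ceph-mon@$(hostname -s)"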

TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
Sunday 01 June 2025 23:47:39 +0000 (0:00:02.260) 0:04:23.850 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Enable ceph-mon.target] ***************************************
Sunday 01 June 2025 23:47:41 +0000 (0:00:01.308) 0:04:25.158 ***********
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [ceph-mon : Start the monitor service] ************************************
Sunday 01 June 2025 23:47:42 +0000 (0:00:01.615) 0:04:26.774 ***********
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
Sunday 01 June 2025 23:47:44 +0000 (0:00:02.056) 0:04:28.831 ***********
included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
Sunday 01 June 2025 23:47:45 +0000 (0:00:00.992) 0:04:29.823 ***********
FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
ok: [testbed-node-0]

TASK [ceph-mon : Fetch ceph initial keys] **************************************
Sunday 01 June 2025 23:48:07 +0000 (0:00:21.938) 0:04:51.762 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Include secure_cluster.yml] ***********************************
Sunday 01 June 2025 23:48:17 +0000 (0:00:09.808) 0:05:01.570 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
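
The 22-second quorum wait above simply retried a monitor status query until all three mons had joined. The equivalent hand check looks roughly like this (the jq filter is illustrative; the role's exact query is an assumption):

    # the task retries until the quorum contains all 3 mons
    ceph quorum_status -f json | jq '.quorum_names | length'
    ceph mon stat   # one-line summary of mons and quorum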

TASK [ceph-mon : Set cluster configs] ******************************************
Sunday 01 June 2025 23:48:17 +0000 (0:00:00.541) 0:05:02.112 ***********
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__44131835835fe9023462bb2872ae3d2f1b793328'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__44131835835fe9023462bb2872ae3d2f1b793328'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__44131835835fe9023462bb2872ae3d2f1b793328'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__44131835835fe9023462bb2872ae3d2f1b793328'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__44131835835fe9023462bb2872ae3d2f1b793328'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__44131835835fe9023462bb2872ae3d2f1b793328'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__44131835835fe9023462bb2872ae3d2f1b793328'}])

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Sunday 01 June 2025 23:48:32 +0000 (0:00:14.426) 0:05:16.538 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mons handler] **********************************
Sunday 01 June 2025 23:48:32 +0000 (0:00:00.347) 0:05:16.885 ***********
included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
Sunday 01 June 2025 23:48:33 +0000 (0:00:00.786) 0:05:17.672 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
Sunday 01 June 2025 23:48:33 +0000 (0:00:00.359) 0:05:18.031 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
Sunday 01 June 2025 23:48:34 +0000 (0:00:00.352) 0:05:18.384 ***********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
Sunday 01 June 2025 23:48:35 +0000 (0:00:00.892) 0:05:19.277 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
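
"Set cluster configs" pushed the item values shown above into the cluster's central configuration database. The equivalent CLI calls for this run's values:

    ceph config set global public_network 192.168.16.0/20
    ceph config set global cluster_network 192.168.16.0/20
    ceph config set global osd_pool_default_crush_rule -1
    ceph config set global ms_bind_ipv6 false
    ceph config set global ms_bind_ipv4 true
    # osd_crush_chooseleaf_type carried an __omit_place_holder__ value and was skipped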

PLAY [Apply role ceph-mgr] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Sunday 01 June 2025 23:48:36 +0000 (0:00:00.905) 0:05:20.183 ***********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Include check_running_containers.yml] *********************
Sunday 01 June 2025 23:48:36 +0000 (0:00:00.543) 0:05:20.726 ***********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Check for a mon container] ********************************
Sunday 01 June 2025 23:48:37 +0000 (0:00:00.777) 0:05:21.504 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for an osd container] *******************************
Sunday 01 June 2025 23:48:38 +0000 (0:00:00.769) 0:05:22.274 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mds container] ********************************
Sunday 01 June 2025 23:48:38 +0000 (0:00:00.293) 0:05:22.567 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a rgw container] ********************************
Sunday 01 June 2025 23:48:38 +0000 (0:00:00.564) 0:05:23.132 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mgr container] ********************************
Sunday 01 June 2025 23:48:39 +0000 (0:00:00.299) 0:05:23.432 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Sunday 01 June 2025 23:48:39 +0000 (0:00:00.709) 0:05:24.141 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a nfs container] ********************************
Sunday 01 June 2025 23:48:40 +0000 (0:00:00.301) 0:05:24.443 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Sunday 01 June 2025 23:48:40 +0000 (0:00:00.579) 0:05:25.023 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Sunday 01 June 2025 23:48:41 +0000 (0:00:00.658) 0:05:25.682 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Sunday 01 June 2025 23:48:42 +0000 (0:00:00.739) 0:05:26.422 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Sunday 01 June 2025 23:48:42 +0000 (0:00:00.304) 0:05:26.726 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Sunday 01 June 2025 23:48:43 +0000 (0:00:00.592) 0:05:27.318 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Sunday 01 June 2025 23:48:43 +0000 (0:00:00.291) 0:05:27.609 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Sunday 01 June 2025 23:48:43 +0000 (0:00:00.303) 0:05:27.913 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Sunday 01 June 2025 23:48:44 +0000 (0:00:00.319) 0:05:28.232 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Sunday 01 June 2025 23:48:44 +0000 (0:00:00.575) 0:05:28.807 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Sunday 01 June 2025 23:48:44 +0000 (0:00:00.316) 0:05:29.123 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Sunday 01 June 2025 23:48:45 +0000 (0:00:00.350) 0:05:29.474 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Sunday 01 June 2025 23:48:45 +0000 (0:00:00.346) 0:05:29.820 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
Sunday 01 June 2025 23:48:46 +0000 (0:00:01.020) 0:05:30.841 ***********
ok: [testbed-node-0] => (item=testbed-node-0)
ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-mgr : Include common.yml] *******************************************
Sunday 01 June 2025 23:48:47 +0000 (0:00:00.645) 0:05:31.486 ***********
included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Create mgr directory] *****************************************
Sunday 01 June 2025 23:48:47 +0000 (0:00:00.564) 0:05:32.051 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
Sunday 01 June 2025 23:48:48 +0000 (0:00:00.960) 0:05:33.011 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
Sunday 01 June 2025 23:48:49 +0000 (0:00:00.320) 0:05:33.332 ***********
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]

TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
Sunday 01 June 2025 23:48:59 +0000 (0:00:10.193) 0:05:43.525 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
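
"Create ceph mgr keyring(s) on a mon node" provisions one mgr identity per host with the standard mgr capability profile; per the Ceph documentation the underlying call has this shape (run once per mgr host, on the first mon):

    # create or fetch the mgr key for testbed-node-0
    ceph auth get-or-create mgr.testbed-node-0 \
        mon 'allow profile mgr' osd 'allow *' mds 'allow *' \
        -o /var/lib/ceph/mgr/ceph-testbed-node-0/keyring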

TASK [ceph-mgr : Get keys from monitors] ***************************************
Sunday 01 June 2025 23:48:59 +0000 (0:00:00.364) 0:05:43.890 ***********
skipping: [testbed-node-0] => (item=None)
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-2] => (item=None)
ok: [testbed-node-0] => (item=None)
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)

TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
Sunday 01 June 2025 23:49:02 +0000 (0:00:02.650) 0:05:46.540 ***********
skipping: [testbed-node-0] => (item=None)
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-2] => (item=None)
changed: [testbed-node-1] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-2] => (item=None)

TASK [ceph-mgr : Set mgr key permissions] **************************************
Sunday 01 June 2025 23:49:03 +0000 (0:00:01.180) 0:05:47.721 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
Sunday 01 June 2025 23:49:04 +0000 (0:00:00.670) 0:05:48.391 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include pre_requisite.yml] ************************************
Sunday 01 June 2025 23:49:04 +0000 (0:00:00.299) 0:05:48.691 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include start_mgr.yml] ****************************************
Sunday 01 June 2025 23:49:04 +0000 (0:00:00.296) 0:05:48.987 ***********
included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Ensure systemd service override directory exists] *************
Sunday 01 June 2025 23:49:05 +0000 (0:00:00.823) 0:05:49.811 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
Sunday 01 June 2025 23:49:05 +0000 (0:00:00.328) 0:05:50.139 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
Sunday 01 June 2025 23:49:06 +0000 (0:00:00.340) 0:05:50.480 ***********
included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Generate systemd unit file] ***********************************
Sunday 01 June 2025 23:49:07 +0000 (0:00:00.767) 0:05:51.247 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
Sunday 01 June 2025 23:49:08 +0000 (0:00:01.210) 0:05:52.457 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
Sunday 01 June 2025 23:49:09 +0000 (0:00:01.162) 0:05:53.619 ***********
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [ceph-mgr : Systemd start mgr] ********************************************
Sunday 01 June 2025 23:49:11 +0000 (0:00:02.040) 0:05:55.660 ***********
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [ceph-mgr : Include mgr_modules.yml] **************************************
Sunday 01 June 2025 23:49:13 +0000 (0:00:01.951) 0:05:57.612 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
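
mgr_modules.yml runs on a single delegate (testbed-node-2 here): it first waits until every mgr has registered with the monitors, then disables and enables modules to match the desired list, which is exactly what the retries and module tasks below show. Hand-checked, that is roughly (jq filter illustrative):

    # one active mgr plus two standbys means all three are up
    ceph mgr dump -f json | jq '{active: .active_name, standbys: (.standbys | length)}'
    # reconcile modules the way the next tasks do
    ceph mgr module disable restful
    ceph mgr module enable dashboard
    ceph mgr module enable prometheus
    ceph mgr module ls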
orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-06-01 23:54:21.242844 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-06-01 23:54:21.242849 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-06-01 23:54:21.242854 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-06-01 23:54:21.242858 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-06-01 23:54:21.242863 | orchestrator | 2025-06-01 23:54:21.242868 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-06-01 23:54:21.242873 | orchestrator | Sunday 01 June 2025 23:49:38 +0000 (0:00:24.435) 0:06:22.447 *********** 2025-06-01 23:54:21.242877 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-06-01 23:54:21.242882 | orchestrator | 2025-06-01 23:54:21.242887 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-06-01 23:54:21.242892 | orchestrator | Sunday 01 June 2025 23:49:39 +0000 (0:00:01.564) 0:06:24.012 *********** 2025-06-01 23:54:21.242896 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:54:21.242901 | orchestrator | 2025-06-01 23:54:21.242906 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-06-01 23:54:21.242910 | orchestrator | Sunday 01 June 2025 23:49:40 +0000 (0:00:00.837) 0:06:24.849 *********** 2025-06-01 23:54:21.242951 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:54:21.242956 | orchestrator | 2025-06-01 23:54:21.242961 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-06-01 23:54:21.242966 | orchestrator | Sunday 01 June 2025 23:49:40 +0000 (0:00:00.139) 0:06:24.988 *********** 2025-06-01 23:54:21.242970 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-06-01 23:54:21.242975 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-06-01 23:54:21.242980 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-06-01 23:54:21.242985 | orchestrator | 2025-06-01 23:54:21.242989 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2025-06-01 23:54:21.242994 | orchestrator | Sunday 01 June 2025 23:49:47 +0000 (0:00:06.373) 0:06:31.362 *********** 2025-06-01 23:54:21.242999 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-06-01 23:54:21.243025 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-06-01 23:54:21.243031 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-06-01 23:54:21.243035 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-06-01 23:54:21.243040 | orchestrator | 2025-06-01 23:54:21.243045 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-01 23:54:21.243050 | orchestrator | Sunday 01 June 2025 23:49:51 +0000 (0:00:04.574) 0:06:35.936 *********** 2025-06-01 23:54:21.243054 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:54:21.243059 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:54:21.243064 
| orchestrator | changed: [testbed-node-2] 2025-06-01 23:54:21.243073 | orchestrator | 2025-06-01 23:54:21.243078 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-06-01 23:54:21.243083 | orchestrator | Sunday 01 June 2025 23:49:52 +0000 (0:00:00.934) 0:06:36.870 *********** 2025-06-01 23:54:21.243087 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:54:21.243092 | orchestrator | 2025-06-01 23:54:21.243097 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-06-01 23:54:21.243102 | orchestrator | Sunday 01 June 2025 23:49:53 +0000 (0:00:00.537) 0:06:37.407 *********** 2025-06-01 23:54:21.243106 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:54:21.243111 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:54:21.243116 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:54:21.243121 | orchestrator | 2025-06-01 23:54:21.243125 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-06-01 23:54:21.243130 | orchestrator | Sunday 01 June 2025 23:49:53 +0000 (0:00:00.365) 0:06:37.773 *********** 2025-06-01 23:54:21.243135 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:54:21.243140 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:54:21.243144 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:54:21.243149 | orchestrator | 2025-06-01 23:54:21.243154 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-06-01 23:54:21.243159 | orchestrator | Sunday 01 June 2025 23:49:55 +0000 (0:00:01.437) 0:06:39.210 *********** 2025-06-01 23:54:21.243163 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-01 23:54:21.243168 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-01 23:54:21.243173 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-01 23:54:21.243178 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.243182 | orchestrator | 2025-06-01 23:54:21.243187 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-06-01 23:54:21.243192 | orchestrator | Sunday 01 June 2025 23:49:55 +0000 (0:00:00.638) 0:06:39.848 *********** 2025-06-01 23:54:21.243196 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:54:21.243201 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:54:21.243206 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:54:21.243211 | orchestrator | 2025-06-01 23:54:21.243215 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-06-01 23:54:21.243220 | orchestrator | 2025-06-01 23:54:21.243225 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-01 23:54:21.243230 | orchestrator | Sunday 01 June 2025 23:49:56 +0000 (0:00:00.554) 0:06:40.402 *********** 2025-06-01 23:54:21.243234 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 23:54:21.243239 | orchestrator | 2025-06-01 23:54:21.243244 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-01 23:54:21.243249 | orchestrator | Sunday 01 June 2025 23:49:56 +0000 (0:00:00.740) 0:06:41.143 *********** 2025-06-01 23:54:21.243254 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 23:54:21.243259 | orchestrator | 2025-06-01 23:54:21.243263 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-01 23:54:21.243268 | orchestrator | Sunday 01 June 2025 23:49:57 +0000 (0:00:00.547) 0:06:41.691 *********** 2025-06-01 23:54:21.243273 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.243278 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.243282 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.243286 | orchestrator | 2025-06-01 23:54:21.243291 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-01 23:54:21.243295 | orchestrator | Sunday 01 June 2025 23:49:57 +0000 (0:00:00.295) 0:06:41.986 *********** 2025-06-01 23:54:21.243300 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:54:21.243304 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:54:21.243312 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:54:21.243317 | orchestrator | 2025-06-01 23:54:21.243321 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-01 23:54:21.243326 | orchestrator | Sunday 01 June 2025 23:49:58 +0000 (0:00:00.933) 0:06:42.920 *********** 2025-06-01 23:54:21.243330 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:54:21.243335 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:54:21.243339 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:54:21.243344 | orchestrator | 2025-06-01 23:54:21.243348 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-01 23:54:21.243353 | orchestrator | Sunday 01 June 2025 23:49:59 +0000 (0:00:00.658) 0:06:43.579 *********** 2025-06-01 23:54:21.243357 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:54:21.243362 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:54:21.243366 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:54:21.243370 | orchestrator | 2025-06-01 23:54:21.243375 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-01 23:54:21.243379 | orchestrator | Sunday 01 June 2025 23:50:00 +0000 (0:00:00.693) 0:06:44.272 *********** 2025-06-01 23:54:21.243384 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.243389 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.243393 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.243398 | orchestrator | 2025-06-01 23:54:21.243402 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-01 23:54:21.243421 | orchestrator | Sunday 01 June 2025 23:50:00 +0000 (0:00:00.342) 0:06:44.615 *********** 2025-06-01 23:54:21.243429 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.243434 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.243438 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.243443 | orchestrator | 2025-06-01 23:54:21.243447 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-01 23:54:21.243452 | orchestrator | Sunday 01 June 2025 23:50:01 +0000 (0:00:00.558) 0:06:45.173 *********** 2025-06-01 23:54:21.243456 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.243461 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.243465 | orchestrator | skipping: [testbed-node-5] 
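
Note: the "Check for a … container" tasks above and below this point all use the same probe: ceph-handler asks the container runtime whether a container for the given daemon type is running on the host, and the results later feed the "Set_fact handler_*_status" tasks. A minimal shell equivalent of such a probe, assuming Docker as the container runtime (with Podman, substitute podman for docker; the exact name filter ceph-handler uses may differ), might look like:

    # List IDs of running containers whose name matches the OSD daemon pattern.
    # Non-empty output means at least one ceph-osd container is up on this host.
    docker ps -q --filter "name=ceph-osd"

    # Wrapped as a conditional, the way the resulting fact is consumed downstream:
    if [ -n "$(docker ps -q --filter 'name=ceph-osd')" ]; then
        echo "an OSD container is running; handler_osd_status would be true"
    fi
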
2025-06-01 23:54:21.243470 | orchestrator | 2025-06-01 23:54:21.243474 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-01 23:54:21.243479 | orchestrator | Sunday 01 June 2025 23:50:01 +0000 (0:00:00.316) 0:06:45.490 *********** 2025-06-01 23:54:21.243483 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:54:21.243488 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:54:21.243492 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:54:21.243497 | orchestrator | 2025-06-01 23:54:21.243501 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-01 23:54:21.243506 | orchestrator | Sunday 01 June 2025 23:50:02 +0000 (0:00:00.664) 0:06:46.155 *********** 2025-06-01 23:54:21.243510 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:54:21.243514 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:54:21.243519 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:54:21.243523 | orchestrator | 2025-06-01 23:54:21.243528 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-01 23:54:21.243532 | orchestrator | Sunday 01 June 2025 23:50:02 +0000 (0:00:00.690) 0:06:46.845 *********** 2025-06-01 23:54:21.243537 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.243541 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.243546 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.243550 | orchestrator | 2025-06-01 23:54:21.243555 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-01 23:54:21.243559 | orchestrator | Sunday 01 June 2025 23:50:03 +0000 (0:00:00.566) 0:06:47.411 *********** 2025-06-01 23:54:21.243564 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.243568 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.243573 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.243577 | orchestrator | 2025-06-01 23:54:21.243585 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-01 23:54:21.243590 | orchestrator | Sunday 01 June 2025 23:50:03 +0000 (0:00:00.318) 0:06:47.729 *********** 2025-06-01 23:54:21.243594 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:54:21.243599 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:54:21.243603 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:54:21.243608 | orchestrator | 2025-06-01 23:54:21.243612 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-01 23:54:21.243617 | orchestrator | Sunday 01 June 2025 23:50:04 +0000 (0:00:00.462) 0:06:48.192 *********** 2025-06-01 23:54:21.243621 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:54:21.243626 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:54:21.243630 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:54:21.243635 | orchestrator | 2025-06-01 23:54:21.243639 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-01 23:54:21.243644 | orchestrator | Sunday 01 June 2025 23:50:04 +0000 (0:00:00.301) 0:06:48.494 *********** 2025-06-01 23:54:21.243648 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:54:21.243653 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:54:21.243657 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:54:21.243661 | orchestrator | 2025-06-01 23:54:21.243666 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] 
****************************** 2025-06-01 23:54:21.243670 | orchestrator | Sunday 01 June 2025 23:50:04 +0000 (0:00:00.569) 0:06:49.064 *********** 2025-06-01 23:54:21.243675 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.243679 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.243684 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.243688 | orchestrator | 2025-06-01 23:54:21.243693 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-01 23:54:21.243697 | orchestrator | Sunday 01 June 2025 23:50:05 +0000 (0:00:00.338) 0:06:49.402 *********** 2025-06-01 23:54:21.243702 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.243706 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.243710 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.243715 | orchestrator | 2025-06-01 23:54:21.243719 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-01 23:54:21.243724 | orchestrator | Sunday 01 June 2025 23:50:05 +0000 (0:00:00.289) 0:06:49.691 *********** 2025-06-01 23:54:21.243728 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.243733 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.243737 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.243742 | orchestrator | 2025-06-01 23:54:21.243746 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-01 23:54:21.243751 | orchestrator | Sunday 01 June 2025 23:50:05 +0000 (0:00:00.291) 0:06:49.983 *********** 2025-06-01 23:54:21.243755 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:54:21.243760 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:54:21.243764 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:54:21.243769 | orchestrator | 2025-06-01 23:54:21.243773 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-01 23:54:21.243778 | orchestrator | Sunday 01 June 2025 23:50:06 +0000 (0:00:00.611) 0:06:50.594 *********** 2025-06-01 23:54:21.243782 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:54:21.243787 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:54:21.243791 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:54:21.243796 | orchestrator | 2025-06-01 23:54:21.243800 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-06-01 23:54:21.243805 | orchestrator | Sunday 01 June 2025 23:50:06 +0000 (0:00:00.535) 0:06:51.130 *********** 2025-06-01 23:54:21.243809 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:54:21.243814 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:54:21.243818 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:54:21.243822 | orchestrator | 2025-06-01 23:54:21.243827 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-06-01 23:54:21.243831 | orchestrator | Sunday 01 June 2025 23:50:07 +0000 (0:00:00.332) 0:06:51.462 *********** 2025-06-01 23:54:21.243843 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-01 23:54:21.243851 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-01 23:54:21.243855 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-01 23:54:21.243860 | orchestrator | 2025-06-01 23:54:21.243865 | orchestrator | TASK 
[ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-06-01 23:54:21.243869 | orchestrator | Sunday 01 June 2025 23:50:08 +0000 (0:00:00.904) 0:06:52.367 *********** 2025-06-01 23:54:21.243873 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 23:54:21.243878 | orchestrator | 2025-06-01 23:54:21.243883 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-06-01 23:54:21.243887 | orchestrator | Sunday 01 June 2025 23:50:09 +0000 (0:00:00.816) 0:06:53.183 *********** 2025-06-01 23:54:21.243892 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.243896 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.243901 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.243905 | orchestrator | 2025-06-01 23:54:21.243910 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-06-01 23:54:21.243927 | orchestrator | Sunday 01 June 2025 23:50:09 +0000 (0:00:00.300) 0:06:53.484 *********** 2025-06-01 23:54:21.243932 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.243936 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.243941 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.243946 | orchestrator | 2025-06-01 23:54:21.243950 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-06-01 23:54:21.243955 | orchestrator | Sunday 01 June 2025 23:50:09 +0000 (0:00:00.309) 0:06:53.793 *********** 2025-06-01 23:54:21.243959 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:54:21.243964 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:54:21.243968 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:54:21.243973 | orchestrator | 2025-06-01 23:54:21.243978 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-06-01 23:54:21.243982 | orchestrator | Sunday 01 June 2025 23:50:10 +0000 (0:00:00.888) 0:06:54.682 *********** 2025-06-01 23:54:21.243987 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:54:21.243991 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:54:21.243996 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:54:21.244000 | orchestrator | 2025-06-01 23:54:21.244005 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-06-01 23:54:21.244009 | orchestrator | Sunday 01 June 2025 23:50:10 +0000 (0:00:00.352) 0:06:55.034 *********** 2025-06-01 23:54:21.244014 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-01 23:54:21.244019 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-01 23:54:21.244023 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-01 23:54:21.244028 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-01 23:54:21.244033 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-01 23:54:21.244037 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-01 23:54:21.244042 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-01 23:54:21.244046 | orchestrator 
| changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-01 23:54:21.244051 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-01 23:54:21.244055 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-01 23:54:21.244066 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-01 23:54:21.244070 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-01 23:54:21.244075 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-01 23:54:21.244079 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-01 23:54:21.244084 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-01 23:54:21.244088 | orchestrator | 2025-06-01 23:54:21.244093 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2025-06-01 23:54:21.244097 | orchestrator | Sunday 01 June 2025 23:50:12 +0000 (0:00:01.897) 0:06:56.932 *********** 2025-06-01 23:54:21.244102 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.244106 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.244111 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.244115 | orchestrator | 2025-06-01 23:54:21.244120 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-06-01 23:54:21.244124 | orchestrator | Sunday 01 June 2025 23:50:13 +0000 (0:00:00.330) 0:06:57.262 *********** 2025-06-01 23:54:21.244129 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 23:54:21.244133 | orchestrator | 2025-06-01 23:54:21.244138 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-06-01 23:54:21.244142 | orchestrator | Sunday 01 June 2025 23:50:13 +0000 (0:00:00.784) 0:06:58.047 *********** 2025-06-01 23:54:21.244147 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-01 23:54:21.244152 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-01 23:54:21.244156 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-01 23:54:21.244164 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-06-01 23:54:21.244172 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-06-01 23:54:21.244177 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-06-01 23:54:21.244181 | orchestrator | 2025-06-01 23:54:21.244186 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-06-01 23:54:21.244191 | orchestrator | Sunday 01 June 2025 23:50:14 +0000 (0:00:00.928) 0:06:58.976 *********** 2025-06-01 23:54:21.244195 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 23:54:21.244200 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-01 23:54:21.244204 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-01 23:54:21.244209 | orchestrator | 2025-06-01 23:54:21.244213 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-06-01 23:54:21.244218 | 
orchestrator | Sunday 01 June 2025 23:50:16 +0000 (0:00:02.110) 0:07:01.086 *********** 2025-06-01 23:54:21.244222 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-01 23:54:21.244227 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-01 23:54:21.244232 | orchestrator | changed: [testbed-node-3] 2025-06-01 23:54:21.244236 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-01 23:54:21.244241 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-06-01 23:54:21.244245 | orchestrator | changed: [testbed-node-4] 2025-06-01 23:54:21.244250 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-01 23:54:21.244254 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-06-01 23:54:21.244259 | orchestrator | changed: [testbed-node-5] 2025-06-01 23:54:21.244263 | orchestrator | 2025-06-01 23:54:21.244268 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-06-01 23:54:21.244272 | orchestrator | Sunday 01 June 2025 23:50:18 +0000 (0:00:01.352) 0:07:02.438 *********** 2025-06-01 23:54:21.244277 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-01 23:54:21.244286 | orchestrator | 2025-06-01 23:54:21.244290 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-06-01 23:54:21.244295 | orchestrator | Sunday 01 June 2025 23:50:20 +0000 (0:00:01.978) 0:07:04.417 *********** 2025-06-01 23:54:21.244299 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 23:54:21.244304 | orchestrator | 2025-06-01 23:54:21.244309 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-06-01 23:54:21.244313 | orchestrator | Sunday 01 June 2025 23:50:20 +0000 (0:00:00.517) 0:07:04.934 *********** 2025-06-01 23:54:21.244318 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-94e6c78b-35f7-5cb8-865b-5befb7b6694e', 'data_vg': 'ceph-94e6c78b-35f7-5cb8-865b-5befb7b6694e'}) 2025-06-01 23:54:21.244323 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-e43a5796-5555-5d7b-8188-8712d414b3d1', 'data_vg': 'ceph-e43a5796-5555-5d7b-8188-8712d414b3d1'}) 2025-06-01 23:54:21.244328 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-008ba5ef-cc9a-56f9-b375-6638a5870e2c', 'data_vg': 'ceph-008ba5ef-cc9a-56f9-b375-6638a5870e2c'}) 2025-06-01 23:54:21.244332 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-0de39833-f6ff-5bf1-9ca3-735e32822edb', 'data_vg': 'ceph-0de39833-f6ff-5bf1-9ca3-735e32822edb'}) 2025-06-01 23:54:21.244337 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af', 'data_vg': 'ceph-3aa9cf12-e8a4-5f15-a0dc-00261f7d28af'}) 2025-06-01 23:54:21.244341 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-21b07b94-4d11-536c-9a45-349f1f6df87d', 'data_vg': 'ceph-21b07b94-4d11-536c-9a45-349f1f6df87d'}) 2025-06-01 23:54:21.244346 | orchestrator | 2025-06-01 23:54:21.244350 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-06-01 23:54:21.244355 | orchestrator | Sunday 01 June 2025 23:51:03 +0000 (0:00:42.508) 0:07:47.442 *********** 2025-06-01 23:54:21.244360 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.244364 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.244369 | 
orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.244373 | orchestrator | 2025-06-01 23:54:21.244378 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-06-01 23:54:21.244382 | orchestrator | Sunday 01 June 2025 23:51:03 +0000 (0:00:00.555) 0:07:47.998 *********** 2025-06-01 23:54:21.244387 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 23:54:21.244391 | orchestrator | 2025-06-01 23:54:21.244396 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-06-01 23:54:21.244400 | orchestrator | Sunday 01 June 2025 23:51:04 +0000 (0:00:00.512) 0:07:48.510 *********** 2025-06-01 23:54:21.244405 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:54:21.244409 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:54:21.244414 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:54:21.244419 | orchestrator | 2025-06-01 23:54:21.244423 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-06-01 23:54:21.244428 | orchestrator | Sunday 01 June 2025 23:51:05 +0000 (0:00:00.654) 0:07:49.165 *********** 2025-06-01 23:54:21.244432 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:54:21.244437 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:54:21.244441 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:54:21.244446 | orchestrator | 2025-06-01 23:54:21.244450 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-06-01 23:54:21.244455 | orchestrator | Sunday 01 June 2025 23:51:07 +0000 (0:00:02.741) 0:07:51.907 *********** 2025-06-01 23:54:21.244463 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 23:54:21.244468 | orchestrator | 2025-06-01 23:54:21.244475 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2025-06-01 23:54:21.244480 | orchestrator | Sunday 01 June 2025 23:51:08 +0000 (0:00:00.528) 0:07:52.436 *********** 2025-06-01 23:54:21.244488 | orchestrator | changed: [testbed-node-3] 2025-06-01 23:54:21.244493 | orchestrator | changed: [testbed-node-4] 2025-06-01 23:54:21.244497 | orchestrator | changed: [testbed-node-5] 2025-06-01 23:54:21.244502 | orchestrator | 2025-06-01 23:54:21.244506 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-06-01 23:54:21.244511 | orchestrator | Sunday 01 June 2025 23:51:09 +0000 (0:00:01.105) 0:07:53.542 *********** 2025-06-01 23:54:21.244515 | orchestrator | changed: [testbed-node-3] 2025-06-01 23:54:21.244520 | orchestrator | changed: [testbed-node-4] 2025-06-01 23:54:21.244524 | orchestrator | changed: [testbed-node-5] 2025-06-01 23:54:21.244529 | orchestrator | 2025-06-01 23:54:21.244533 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-06-01 23:54:21.244538 | orchestrator | Sunday 01 June 2025 23:51:10 +0000 (0:00:01.452) 0:07:54.995 *********** 2025-06-01 23:54:21.244543 | orchestrator | changed: [testbed-node-3] 2025-06-01 23:54:21.244547 | orchestrator | changed: [testbed-node-4] 2025-06-01 23:54:21.244552 | orchestrator | changed: [testbed-node-5] 2025-06-01 23:54:21.244556 | orchestrator | 2025-06-01 23:54:21.244561 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-06-01 
23:54:21.244565 | orchestrator | Sunday 01 June 2025 23:51:12 +0000 (0:00:01.843) 0:07:56.838 *********** 2025-06-01 23:54:21.244570 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.244574 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.244579 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.244583 | orchestrator | 2025-06-01 23:54:21.244588 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2025-06-01 23:54:21.244593 | orchestrator | Sunday 01 June 2025 23:51:12 +0000 (0:00:00.304) 0:07:57.143 *********** 2025-06-01 23:54:21.244597 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.244602 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.244606 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.244611 | orchestrator | 2025-06-01 23:54:21.244615 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/<cluster>-<osd-id> is present] ********* 2025-06-01 23:54:21.244620 | orchestrator | Sunday 01 June 2025 23:51:13 +0000 (0:00:00.332) 0:07:57.476 *********** 2025-06-01 23:54:21.244625 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-01 23:54:21.244629 | orchestrator | ok: [testbed-node-4] => (item=2) 2025-06-01 23:54:21.244634 | orchestrator | ok: [testbed-node-5] => (item=3) 2025-06-01 23:54:21.244638 | orchestrator | ok: [testbed-node-3] => (item=5) 2025-06-01 23:54:21.244643 | orchestrator | ok: [testbed-node-4] => (item=4) 2025-06-01 23:54:21.244647 | orchestrator | ok: [testbed-node-5] => (item=1) 2025-06-01 23:54:21.244652 | orchestrator | 2025-06-01 23:54:21.244656 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-06-01 23:54:21.244661 | orchestrator | Sunday 01 June 2025 23:51:14 +0000 (0:00:01.241) 0:07:58.718 *********** 2025-06-01 23:54:21.244666 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-06-01 23:54:21.244670 | orchestrator | changed: [testbed-node-4] => (item=2) 2025-06-01 23:54:21.244675 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-06-01 23:54:21.244679 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-06-01 23:54:21.244684 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-06-01 23:54:21.244688 | orchestrator | changed: [testbed-node-5] => (item=1) 2025-06-01 23:54:21.244693 | orchestrator | 2025-06-01 23:54:21.244697 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2025-06-01 23:54:21.244702 | orchestrator | Sunday 01 June 2025 23:51:16 +0000 (0:00:02.095) 0:08:00.813 *********** 2025-06-01 23:54:21.244706 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-06-01 23:54:21.244711 | orchestrator | changed: [testbed-node-4] => (item=2) 2025-06-01 23:54:21.244715 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-06-01 23:54:21.244720 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-06-01 23:54:21.244724 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-06-01 23:54:21.244732 | orchestrator | changed: [testbed-node-5] => (item=1) 2025-06-01 23:54:21.244737 | orchestrator | 2025-06-01 23:54:21.244741 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-06-01 23:54:21.244746 | orchestrator | Sunday 01 June 2025 23:51:20 +0000 (0:00:03.626) 0:08:04.439 *********** 2025-06-01 23:54:21.244750 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.244755 | orchestrator | skipping: [testbed-node-4] 2025-06-01
23:54:21.244760 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-06-01 23:54:21.244764 | orchestrator | 2025-06-01 23:54:21.244769 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-06-01 23:54:21.244773 | orchestrator | Sunday 01 June 2025 23:51:22 +0000 (0:00:02.369) 0:08:06.809 *********** 2025-06-01 23:54:21.244778 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.244782 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.244787 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2025-06-01 23:54:21.244792 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-06-01 23:54:21.244796 | orchestrator | 2025-06-01 23:54:21.244801 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-06-01 23:54:21.244805 | orchestrator | Sunday 01 June 2025 23:51:35 +0000 (0:00:12.915) 0:08:19.725 *********** 2025-06-01 23:54:21.244810 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.244814 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.244819 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.244823 | orchestrator | 2025-06-01 23:54:21.244828 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-01 23:54:21.244833 | orchestrator | Sunday 01 June 2025 23:51:36 +0000 (0:00:00.830) 0:08:20.555 *********** 2025-06-01 23:54:21.244837 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.244842 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.244846 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.244851 | orchestrator | 2025-06-01 23:54:21.244862 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-06-01 23:54:21.244873 | orchestrator | Sunday 01 June 2025 23:51:37 +0000 (0:00:00.620) 0:08:21.176 *********** 2025-06-01 23:54:21.244881 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 23:54:21.244888 | orchestrator | 2025-06-01 23:54:21.244896 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-06-01 23:54:21.244903 | orchestrator | Sunday 01 June 2025 23:51:37 +0000 (0:00:00.555) 0:08:21.731 *********** 2025-06-01 23:54:21.244910 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-01 23:54:21.244948 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-01 23:54:21.244953 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-01 23:54:21.244957 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.244962 | orchestrator | 2025-06-01 23:54:21.244966 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-06-01 23:54:21.244971 | orchestrator | Sunday 01 June 2025 23:51:37 +0000 (0:00:00.385) 0:08:22.116 *********** 2025-06-01 23:54:21.244975 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.244979 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.244984 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.244988 | orchestrator | 2025-06-01 23:54:21.244993 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-06-01 23:54:21.244997 | 
orchestrator | Sunday 01 June 2025 23:51:38 +0000 (0:00:00.354) 0:08:22.471 *********** 2025-06-01 23:54:21.245002 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.245006 | orchestrator | 2025-06-01 23:54:21.245011 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-06-01 23:54:21.245015 | orchestrator | Sunday 01 June 2025 23:51:38 +0000 (0:00:00.204) 0:08:22.675 *********** 2025-06-01 23:54:21.245023 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.245028 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.245032 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.245037 | orchestrator | 2025-06-01 23:54:21.245042 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-06-01 23:54:21.245047 | orchestrator | Sunday 01 June 2025 23:51:39 +0000 (0:00:00.564) 0:08:23.240 *********** 2025-06-01 23:54:21.245051 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.245055 | orchestrator | 2025-06-01 23:54:21.245059 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-06-01 23:54:21.245063 | orchestrator | Sunday 01 June 2025 23:51:39 +0000 (0:00:00.218) 0:08:23.459 *********** 2025-06-01 23:54:21.245067 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.245071 | orchestrator | 2025-06-01 23:54:21.245075 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-06-01 23:54:21.245079 | orchestrator | Sunday 01 June 2025 23:51:39 +0000 (0:00:00.221) 0:08:23.680 *********** 2025-06-01 23:54:21.245083 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.245087 | orchestrator | 2025-06-01 23:54:21.245091 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-06-01 23:54:21.245095 | orchestrator | Sunday 01 June 2025 23:51:39 +0000 (0:00:00.129) 0:08:23.810 *********** 2025-06-01 23:54:21.245099 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.245103 | orchestrator | 2025-06-01 23:54:21.245107 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-06-01 23:54:21.245111 | orchestrator | Sunday 01 June 2025 23:51:39 +0000 (0:00:00.236) 0:08:24.046 *********** 2025-06-01 23:54:21.245115 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.245119 | orchestrator | 2025-06-01 23:54:21.245123 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-06-01 23:54:21.245127 | orchestrator | Sunday 01 June 2025 23:51:40 +0000 (0:00:00.227) 0:08:24.274 *********** 2025-06-01 23:54:21.245131 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-01 23:54:21.245136 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-01 23:54:21.245140 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-01 23:54:21.245144 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.245148 | orchestrator | 2025-06-01 23:54:21.245152 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-06-01 23:54:21.245156 | orchestrator | Sunday 01 June 2025 23:51:40 +0000 (0:00:00.400) 0:08:24.675 *********** 2025-06-01 23:54:21.245160 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.245164 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.245168 | 
orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.245172 | orchestrator | 2025-06-01 23:54:21.245176 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-06-01 23:54:21.245180 | orchestrator | Sunday 01 June 2025 23:51:40 +0000 (0:00:00.281) 0:08:24.957 *********** 2025-06-01 23:54:21.245184 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.245188 | orchestrator | 2025-06-01 23:54:21.245192 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-06-01 23:54:21.245196 | orchestrator | Sunday 01 June 2025 23:51:41 +0000 (0:00:00.793) 0:08:25.750 *********** 2025-06-01 23:54:21.245200 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.245204 | orchestrator | 2025-06-01 23:54:21.245208 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-06-01 23:54:21.245212 | orchestrator | 2025-06-01 23:54:21.245216 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-01 23:54:21.245220 | orchestrator | Sunday 01 June 2025 23:51:42 +0000 (0:00:00.635) 0:08:26.385 *********** 2025-06-01 23:54:21.245225 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 23:54:21.245229 | orchestrator | 2025-06-01 23:54:21.245237 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-01 23:54:21.245241 | orchestrator | Sunday 01 June 2025 23:51:43 +0000 (0:00:01.221) 0:08:27.607 *********** 2025-06-01 23:54:21.245251 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 23:54:21.245256 | orchestrator | 2025-06-01 23:54:21.245260 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-01 23:54:21.245264 | orchestrator | Sunday 01 June 2025 23:51:44 +0000 (0:00:01.272) 0:08:28.879 *********** 2025-06-01 23:54:21.245268 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.245272 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.245276 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:54:21.245280 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:54:21.245284 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:54:21.245288 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.245292 | orchestrator | 2025-06-01 23:54:21.245296 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-01 23:54:21.245301 | orchestrator | Sunday 01 June 2025 23:51:45 +0000 (0:00:00.869) 0:08:29.749 *********** 2025-06-01 23:54:21.245305 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.245309 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.245313 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.245317 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:54:21.245321 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:54:21.245325 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:54:21.245329 | orchestrator | 2025-06-01 23:54:21.245333 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-01 23:54:21.245337 | orchestrator | Sunday 01 June 2025 
23:51:46 +0000 (0:00:01.063) 0:08:30.812 *********** 2025-06-01 23:54:21.245341 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.245345 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.245349 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.245353 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:54:21.245357 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:54:21.245361 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:54:21.245365 | orchestrator | 2025-06-01 23:54:21.245369 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-01 23:54:21.245373 | orchestrator | Sunday 01 June 2025 23:51:47 +0000 (0:00:01.327) 0:08:32.139 *********** 2025-06-01 23:54:21.245377 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.245382 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.245385 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.245390 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:54:21.245394 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:54:21.245398 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:54:21.245402 | orchestrator | 2025-06-01 23:54:21.245406 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-01 23:54:21.245410 | orchestrator | Sunday 01 June 2025 23:51:48 +0000 (0:00:00.992) 0:08:33.132 *********** 2025-06-01 23:54:21.245414 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.245418 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:54:21.245422 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.245426 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:54:21.245430 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:54:21.245434 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.245439 | orchestrator | 2025-06-01 23:54:21.245443 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-01 23:54:21.245447 | orchestrator | Sunday 01 June 2025 23:51:49 +0000 (0:00:00.867) 0:08:34.000 *********** 2025-06-01 23:54:21.245451 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.245455 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.245459 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.245463 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.245565 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.245569 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.245573 | orchestrator | 2025-06-01 23:54:21.245577 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-01 23:54:21.245581 | orchestrator | Sunday 01 June 2025 23:51:50 +0000 (0:00:00.600) 0:08:34.600 *********** 2025-06-01 23:54:21.245585 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.245589 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.245593 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.245597 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.245601 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.245605 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.245609 | orchestrator | 2025-06-01 23:54:21.245613 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-01 23:54:21.245618 | orchestrator | Sunday 01 June 2025 23:51:51 +0000 (0:00:00.812) 0:08:35.413 
*********** 2025-06-01 23:54:21.245622 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:54:21.245626 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:54:21.245630 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:54:21.245634 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:54:21.245638 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:54:21.245642 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:54:21.245646 | orchestrator | 2025-06-01 23:54:21.245650 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-01 23:54:21.245654 | orchestrator | Sunday 01 June 2025 23:51:52 +0000 (0:00:00.993) 0:08:36.407 *********** 2025-06-01 23:54:21.245658 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:54:21.245662 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:54:21.245666 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:54:21.245670 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:54:21.245674 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:54:21.245678 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:54:21.245682 | orchestrator | 2025-06-01 23:54:21.245686 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-01 23:54:21.245690 | orchestrator | Sunday 01 June 2025 23:51:53 +0000 (0:00:01.192) 0:08:37.600 *********** 2025-06-01 23:54:21.245694 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.245698 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.245702 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.245706 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.245710 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.245715 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.245719 | orchestrator | 2025-06-01 23:54:21.245723 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-01 23:54:21.245727 | orchestrator | Sunday 01 June 2025 23:51:54 +0000 (0:00:00.614) 0:08:38.214 *********** 2025-06-01 23:54:21.245731 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:54:21.245737 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:54:21.245741 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:54:21.245748 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:54:21.245752 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:54:21.245756 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:54:21.245760 | orchestrator | 2025-06-01 23:54:21.245765 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-01 23:54:21.245769 | orchestrator | Sunday 01 June 2025 23:51:54 +0000 (0:00:00.788) 0:08:39.002 *********** 2025-06-01 23:54:21.245773 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:54:21.245777 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:54:21.245781 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:54:21.245785 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:54:21.245789 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:54:21.245793 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:54:21.245797 | orchestrator | 2025-06-01 23:54:21.245801 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-01 23:54:21.245808 | orchestrator | Sunday 01 June 2025 23:51:55 +0000 (0:00:00.673) 0:08:39.676 *********** 2025-06-01 23:54:21.245812 | orchestrator | skipping: [testbed-node-0] 2025-06-01 
23:54:21.245816 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:54:21.245820 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:54:21.245824 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.245828 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.245832 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.245836 | orchestrator |
2025-06-01 23:54:21.245840 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-06-01 23:54:21.245844 | orchestrator | Sunday 01 June 2025 23:51:56 +0000 (0:00:00.830) 0:08:40.506 ***********
2025-06-01 23:54:21.245849 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.245853 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:54:21.245857 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:54:21.245861 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.245865 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.245869 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.245873 | orchestrator |
2025-06-01 23:54:21.245877 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-06-01 23:54:21.245881 | orchestrator | Sunday 01 June 2025 23:51:56 +0000 (0:00:00.625) 0:08:41.132 ***********
2025-06-01 23:54:21.245885 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.245889 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:54:21.245893 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:54:21.245897 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.245901 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.245905 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.245909 | orchestrator |
2025-06-01 23:54:21.245929 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-06-01 23:54:21.245933 | orchestrator | Sunday 01 June 2025 23:51:57 +0000 (0:00:00.836) 0:08:41.968 ***********
2025-06-01 23:54:21.245937 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:54:21.245941 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:54:21.245945 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:54:21.245949 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.245953 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.245957 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.245961 | orchestrator |
2025-06-01 23:54:21.245966 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-06-01 23:54:21.245970 | orchestrator | Sunday 01 June 2025 23:51:58 +0000 (0:00:00.581) 0:08:42.550 ***********
2025-06-01 23:54:21.245974 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:54:21.245978 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:54:21.245982 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:54:21.245986 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.245990 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.245994 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.245998 | orchestrator |
2025-06-01 23:54:21.246002 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-06-01 23:54:21.246006 | orchestrator | Sunday 01 June 2025 23:51:59 +0000 (0:00:00.807) 0:08:43.357 ***********
2025-06-01 23:54:21.246010 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:54:21.246034 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:54:21.246039 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:54:21.246043 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.246047 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.246051 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.246055 | orchestrator |
2025-06-01 23:54:21.246060 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-06-01 23:54:21.246064 | orchestrator | Sunday 01 June 2025 23:51:59 +0000 (0:00:00.598) 0:08:43.956 ***********
2025-06-01 23:54:21.246068 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:54:21.246072 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:54:21.246076 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:54:21.246083 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.246087 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.246092 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.246096 | orchestrator |
2025-06-01 23:54:21.246100 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2025-06-01 23:54:21.246104 | orchestrator | Sunday 01 June 2025 23:52:01 +0000 (0:00:01.260) 0:08:45.217 ***********
2025-06-01 23:54:21.246108 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:54:21.246112 | orchestrator |
2025-06-01 23:54:21.246116 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2025-06-01 23:54:21.246120 | orchestrator | Sunday 01 June 2025 23:52:05 +0000 (0:00:04.005) 0:08:49.222 ***********
2025-06-01 23:54:21.246124 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:54:21.246128 | orchestrator |
2025-06-01 23:54:21.246133 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2025-06-01 23:54:21.246137 | orchestrator | Sunday 01 June 2025 23:52:07 +0000 (0:00:02.085) 0:08:51.307 ***********
2025-06-01 23:54:21.246141 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:54:21.246145 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:54:21.246149 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:54:21.246153 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:54:21.246157 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:54:21.246161 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:54:21.246165 | orchestrator |
2025-06-01 23:54:21.246169 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2025-06-01 23:54:21.246173 | orchestrator | Sunday 01 June 2025 23:52:08 +0000 (0:00:01.707) 0:08:53.015 ***********
2025-06-01 23:54:21.246180 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:54:21.246184 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:54:21.246191 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:54:21.246196 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:54:21.246200 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:54:21.246204 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:54:21.246208 | orchestrator |
2025-06-01 23:54:21.246212 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2025-06-01 23:54:21.246216 | orchestrator | Sunday 01 June 2025 23:52:09 +0000 (0:00:01.049) 0:08:54.064 ***********
2025-06-01 23:54:21.246220 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 23:54:21.246225 | orchestrator |
2025-06-01 23:54:21.246229 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2025-06-01 23:54:21.246233 | orchestrator | Sunday 01 June 2025 23:52:11 +0000 (0:00:01.232) 0:08:55.297 ***********
2025-06-01 23:54:21.246237 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:54:21.246241 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:54:21.246245 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:54:21.246249 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:54:21.246253 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:54:21.246257 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:54:21.246261 | orchestrator |
2025-06-01 23:54:21.246265 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2025-06-01 23:54:21.246270 | orchestrator | Sunday 01 June 2025 23:52:12 +0000 (0:00:01.646) 0:08:56.943 ***********
2025-06-01 23:54:21.246274 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:54:21.246278 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:54:21.246282 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:54:21.246286 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:54:21.246290 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:54:21.246294 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:54:21.246298 | orchestrator |
2025-06-01 23:54:21.246302 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2025-06-01 23:54:21.246306 | orchestrator | Sunday 01 June 2025 23:52:16 +0000 (0:00:03.238) 0:09:00.181 ***********
2025-06-01 23:54:21.246316 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 23:54:21.246320 | orchestrator |
2025-06-01 23:54:21.246324 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2025-06-01 23:54:21.246328 | orchestrator | Sunday 01 June 2025 23:52:17 +0000 (0:00:01.310) 0:09:01.492 ***********
2025-06-01 23:54:21.246332 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:54:21.246336 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:54:21.246340 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:54:21.246344 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.246348 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.246352 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.246356 | orchestrator |
2025-06-01 23:54:21.246361 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2025-06-01 23:54:21.246365 | orchestrator | Sunday 01 June 2025 23:52:18 +0000 (0:00:00.810) 0:09:02.302 ***********
2025-06-01 23:54:21.246369 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:54:21.246373 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:54:21.246377 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:54:21.246381 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:54:21.246385 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:54:21.246389 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:54:21.246393 | orchestrator |
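For orientation, the ceph-crash steps above boil down to something like the following Ansible tasks. This is a minimal sketch, not the ceph-ansible implementation: it assumes a monitor with a working admin keyring, the 'profile crash' capabilities documented by Ceph, and the usual uid 167 for the ceph user inside the container image.

- name: Create client.crash keyring (sketch of the ceph-crash role step)
  ansible.builtin.command: >
    ceph auth get-or-create client.crash
    mon 'profile crash' mgr 'profile crash'
    -o /etc/ceph/ceph.client.crash.keyring
  args:
    creates: /etc/ceph/ceph.client.crash.keyring

- name: Create the posted crash directory
  ansible.builtin.file:
    path: /var/lib/ceph/crash/posted
    state: directory
    owner: "167"  # assumed ceph uid in the container image
    group: "167"
    mode: "0750"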
2025-06-01 23:54:21.246397 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2025-06-01 23:54:21.246401 | orchestrator | Sunday 01 June 2025 23:52:20 +0000 (0:00:02.148) 0:09:04.450 ***********
2025-06-01 23:54:21.246405 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:54:21.246409 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:54:21.246413 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:54:21.246417 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.246421 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.246425 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.246429 | orchestrator |
2025-06-01 23:54:21.246434 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2025-06-01 23:54:21.246438 | orchestrator |
2025-06-01 23:54:21.246442 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-01 23:54:21.246446 | orchestrator | Sunday 01 June 2025 23:52:21 +0000 (0:00:01.186) 0:09:05.637 ***********
2025-06-01 23:54:21.246450 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 23:54:21.246454 | orchestrator |
2025-06-01 23:54:21.246458 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-01 23:54:21.246463 | orchestrator | Sunday 01 June 2025 23:52:21 +0000 (0:00:00.494) 0:09:06.131 ***********
2025-06-01 23:54:21.246467 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 23:54:21.246471 | orchestrator |
2025-06-01 23:54:21.246475 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-01 23:54:21.246479 | orchestrator | Sunday 01 June 2025 23:52:22 +0000 (0:00:00.792) 0:09:06.923 ***********
2025-06-01 23:54:21.246483 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.246487 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.246491 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.246495 | orchestrator |
2025-06-01 23:54:21.246499 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-01 23:54:21.246503 | orchestrator | Sunday 01 June 2025 23:52:23 +0000 (0:00:00.293) 0:09:07.217 ***********
2025-06-01 23:54:21.246507 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.246511 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.246515 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.246519 | orchestrator |
2025-06-01 23:54:21.246524 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-06-01 23:54:21.246534 | orchestrator | Sunday 01 June 2025 23:52:23 +0000 (0:00:00.690) 0:09:07.908 ***********
2025-06-01 23:54:21.246539 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.246546 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.246550 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.246554 | orchestrator |
2025-06-01 23:54:21.246558 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-06-01 23:54:21.246563 | orchestrator | Sunday 01 June 2025 23:52:24 +0000 (0:00:01.077) 0:09:08.985 ***********
2025-06-01 23:54:21.246567 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.246571 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.246575 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.246579 | orchestrator |
2025-06-01 23:54:21.246583 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-06-01 23:54:21.246587 | orchestrator | Sunday 01 June 2025 23:52:25 +0000 (0:00:00.769) 0:09:09.755 ***********
2025-06-01 23:54:21.246591 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.246595 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.246599 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.246604 | orchestrator |
2025-06-01 23:54:21.246608 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-06-01 23:54:21.246612 | orchestrator | Sunday 01 June 2025 23:52:25 +0000 (0:00:00.322) 0:09:10.077 ***********
2025-06-01 23:54:21.246616 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.246620 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.246624 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.246628 | orchestrator |
2025-06-01 23:54:21.246632 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-01 23:54:21.246636 | orchestrator | Sunday 01 June 2025 23:52:26 +0000 (0:00:00.303) 0:09:10.381 ***********
2025-06-01 23:54:21.246640 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.246644 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.246648 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.246652 | orchestrator |
2025-06-01 23:54:21.246656 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-06-01 23:54:21.246661 | orchestrator | Sunday 01 June 2025 23:52:26 +0000 (0:00:00.575) 0:09:10.957 ***********
2025-06-01 23:54:21.246665 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.246669 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.246673 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.246677 | orchestrator |
2025-06-01 23:54:21.246681 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-06-01 23:54:21.246685 | orchestrator | Sunday 01 June 2025 23:52:27 +0000 (0:00:00.701) 0:09:11.658 ***********
2025-06-01 23:54:21.246689 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.246693 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.246697 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.246701 | orchestrator |
2025-06-01 23:54:21.246705 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-06-01 23:54:21.246709 | orchestrator | Sunday 01 June 2025 23:52:28 +0000 (0:00:00.737) 0:09:12.396 ***********
2025-06-01 23:54:21.246713 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.246718 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.246722 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.246726 | orchestrator |
2025-06-01 23:54:21.246730 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-06-01 23:54:21.246734 | orchestrator | Sunday 01 June 2025 23:52:28 +0000 (0:00:00.280) 0:09:12.676 ***********
2025-06-01 23:54:21.246738 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.246742 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.246746 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.246750 | orchestrator |
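Each "Check for a … container" task above is a non-failing container lookup whose result later feeds the matching "Set_fact handler_*_status" task. A rough equivalent, assuming Docker as the container engine and ceph-ansible's container naming scheme (both assumptions, not taken from this log):

- name: Check for a mds container (sketch)
  ansible.builtin.command: docker ps -q --filter name=ceph-mds-{{ ansible_facts['hostname'] }}
  register: ceph_mds_container_stat
  changed_when: false
  failed_when: false

- name: Set_fact handler_mds_status (sketch)
  ansible.builtin.set_fact:
    handler_mds_status: "{{ ceph_mds_container_stat.stdout | length > 0 }}"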
2025-06-01 23:54:21.246754 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-06-01 23:54:21.246758 | orchestrator | Sunday 01 June 2025 23:52:29 +0000 (0:00:00.587) 0:09:13.264 ***********
2025-06-01 23:54:21.246767 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.246771 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.246775 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.246779 | orchestrator |
2025-06-01 23:54:21.246783 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-06-01 23:54:21.246788 | orchestrator | Sunday 01 June 2025 23:52:29 +0000 (0:00:00.352) 0:09:13.617 ***********
2025-06-01 23:54:21.246792 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.246796 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.246800 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.246804 | orchestrator |
2025-06-01 23:54:21.246808 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-06-01 23:54:21.246812 | orchestrator | Sunday 01 June 2025 23:52:29 +0000 (0:00:00.312) 0:09:13.929 ***********
2025-06-01 23:54:21.246816 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.246820 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.246824 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.246828 | orchestrator |
2025-06-01 23:54:21.246832 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-06-01 23:54:21.246836 | orchestrator | Sunday 01 June 2025 23:52:30 +0000 (0:00:00.307) 0:09:14.237 ***********
2025-06-01 23:54:21.246840 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.246844 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.246849 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.246853 | orchestrator |
2025-06-01 23:54:21.246857 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-06-01 23:54:21.246861 | orchestrator | Sunday 01 June 2025 23:52:30 +0000 (0:00:00.623) 0:09:14.861 ***********
2025-06-01 23:54:21.246865 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.246869 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.246873 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.246877 | orchestrator |
2025-06-01 23:54:21.246881 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-06-01 23:54:21.246885 | orchestrator | Sunday 01 June 2025 23:52:31 +0000 (0:00:00.336) 0:09:15.198 ***********
2025-06-01 23:54:21.246889 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.246893 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.246897 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.246901 | orchestrator |
2025-06-01 23:54:21.246906 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-06-01 23:54:21.246910 | orchestrator | Sunday 01 June 2025 23:52:31 +0000 (0:00:00.305) 0:09:15.503 ***********
2025-06-01 23:54:21.246928 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.246934 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.246938 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.246942 | orchestrator |
2025-06-01 23:54:21.246949 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-06-01 23:54:21.246954 | orchestrator | Sunday 01 June 2025 23:52:31 +0000 (0:00:00.337) 0:10:15.841 ***********
2025-06-01 23:54:21.246958 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.246962 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.246966 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.246970 | orchestrator |
2025-06-01 23:54:21.246974 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2025-06-01 23:54:21.246978 | orchestrator | Sunday 01 June 2025 23:52:32 +0000 (0:00:00.838) 0:09:16.680 ***********
2025-06-01 23:54:21.246982 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.246986 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.246991 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2025-06-01 23:54:21.246995 | orchestrator |
2025-06-01 23:54:21.246999 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2025-06-01 23:54:21.247003 | orchestrator | Sunday 01 June 2025 23:52:32 +0000 (0:00:00.392) 0:09:17.072 ***********
2025-06-01 23:54:21.247007 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-06-01 23:54:21.247018 | orchestrator |
2025-06-01 23:54:21.247022 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2025-06-01 23:54:21.247027 | orchestrator | Sunday 01 June 2025 23:52:35 +0000 (0:00:02.112) 0:09:19.185 ***********
2025-06-01 23:54:21.247032 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2025-06-01 23:54:21.247037 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.247041 | orchestrator |
2025-06-01 23:54:21.247045 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2025-06-01 23:54:21.247049 | orchestrator | Sunday 01 June 2025 23:52:35 +0000 (0:00:00.230) 0:09:19.415 ***********
2025-06-01 23:54:21.247054 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-01 23:54:21.247061 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-01 23:54:21.247065 | orchestrator |
2025-06-01 23:54:21.247069 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2025-06-01 23:54:21.247073 | orchestrator | Sunday 01 June 2025 23:52:43 +0000 (0:00:08.483) 0:09:27.899 ***********
2025-06-01 23:54:21.247077 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-06-01 23:54:21.247081 | orchestrator |
2025-06-01 23:54:21.247086 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2025-06-01 23:54:21.247090 | orchestrator | Sunday 01 June 2025 23:52:47 +0000 (0:00:03.493) 0:09:31.393 ***********
2025-06-01 23:54:21.247094 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 23:54:21.247098 | orchestrator |
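The two changed items above correspond to plain ceph CLI calls on the delegate monitor. A minimal sketch of the same steps, with the pool names and pg_num taken from the log items and everything else assumed (admin keyring present on testbed-node-0, default filesystem name cephfs):

- name: Create filesystem pools (sketch)
  ansible.builtin.command: ceph osd pool create {{ item }} 16 16 replicated
  loop:
    - cephfs_data
    - cephfs_metadata
  delegate_to: testbed-node-0
  run_once: true

- name: Create ceph filesystem (sketch)
  ansible.builtin.command: ceph fs new cephfs cephfs_metadata cephfs_data
  delegate_to: testbed-node-0
  run_once: true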
2025-06-01 23:54:21.247102 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2025-06-01 23:54:21.247106 | orchestrator | Sunday 01 June 2025 23:52:47 +0000 (0:00:00.555) 0:09:31.949 ***********
2025-06-01 23:54:21.247110 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2025-06-01 23:54:21.247114 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2025-06-01 23:54:21.247118 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2025-06-01 23:54:21.247122 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2025-06-01 23:54:21.247126 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2025-06-01 23:54:21.247131 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2025-06-01 23:54:21.247135 | orchestrator |
2025-06-01 23:54:21.247139 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2025-06-01 23:54:21.247143 | orchestrator | Sunday 01 June 2025 23:52:48 +0000 (0:00:00.997) 0:09:32.946 ***********
2025-06-01 23:54:21.247147 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 23:54:21.247151 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-06-01 23:54:21.247155 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-06-01 23:54:21.247159 | orchestrator |
2025-06-01 23:54:21.247163 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2025-06-01 23:54:21.247167 | orchestrator | Sunday 01 June 2025 23:52:51 +0000 (0:00:02.473) 0:09:35.420 ***********
2025-06-01 23:54:21.247171 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-01 23:54:21.247176 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-06-01 23:54:21.247180 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:54:21.247187 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-01 23:54:21.247192 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-06-01 23:54:21.247196 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:54:21.247200 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-01 23:54:21.247204 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-06-01 23:54:21.247210 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:54:21.247215 | orchestrator |
2025-06-01 23:54:21.247219 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2025-06-01 23:54:21.247223 | orchestrator | Sunday 01 June 2025 23:52:53 +0000 (0:00:01.867) 0:09:37.287 ***********
2025-06-01 23:54:21.247227 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:54:21.247231 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:54:21.247235 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:54:21.247239 | orchestrator |
2025-06-01 23:54:21.247243 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2025-06-01 23:54:21.247248 | orchestrator | Sunday 01 June 2025 23:52:55 +0000 (0:00:02.704) 0:09:39.992 ***********
2025-06-01 23:54:21.247252 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.247256 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.247260 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.247264 | orchestrator |
2025-06-01 23:54:21.247324 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2025-06-01 23:54:21.247334 | orchestrator | Sunday 01 June 2025 23:52:56 +0000 (0:00:00.297) 0:09:40.289 ***********
2025-06-01 23:54:21.247338 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 23:54:21.247342 | orchestrator |
2025-06-01 23:54:21.247346 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2025-06-01 23:54:21.247350 | orchestrator | Sunday 01 June 2025 23:52:56 +0000 (0:00:00.771) 0:09:41.061 ***********
2025-06-01 23:54:21.247355 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 23:54:21.247359 | orchestrator |
2025-06-01 23:54:21.247363 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2025-06-01 23:54:21.247367 | orchestrator | Sunday 01 June 2025 23:52:57 +0000 (0:00:00.467) 0:09:41.528 ***********
2025-06-01 23:54:21.247371 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:54:21.247375 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:54:21.247379 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:54:21.247383 | orchestrator |
2025-06-01 23:54:21.247387 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2025-06-01 23:54:21.247392 | orchestrator | Sunday 01 June 2025 23:52:58 +0000 (0:00:01.268) 0:09:42.797 ***********
2025-06-01 23:54:21.247396 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:54:21.247400 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:54:21.247404 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:54:21.247408 | orchestrator |
2025-06-01 23:54:21.247412 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2025-06-01 23:54:21.247416 | orchestrator | Sunday 01 June 2025 23:53:00 +0000 (0:00:01.447) 0:09:44.245 ***********
2025-06-01 23:54:21.247420 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:54:21.247424 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:54:21.247428 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:54:21.247432 | orchestrator |
2025-06-01 23:54:21.247436 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2025-06-01 23:54:21.247440 | orchestrator | Sunday 01 June 2025 23:53:01 +0000 (0:00:01.863) 0:09:46.109 ***********
2025-06-01 23:54:21.247444 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:54:21.247448 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:54:21.247452 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:54:21.247456 | orchestrator |
2025-06-01 23:54:21.247461 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2025-06-01 23:54:21.247468 | orchestrator | Sunday 01 June 2025 23:53:04 +0000 (0:00:02.095) 0:09:48.204 ***********
2025-06-01 23:54:21.247472 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.247476 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.247480 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.247484 | orchestrator |
2025-06-01 23:54:21.247489 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-06-01 23:54:21.247493 | orchestrator | Sunday 01 June 2025 23:53:05 +0000 (0:00:01.748) 0:09:49.953 ***********
2025-06-01 23:54:21.247497 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:54:21.247501 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:54:21.247505 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:54:21.247509 | orchestrator |
2025-06-01 23:54:21.247513 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-06-01 23:54:21.247517 | orchestrator | Sunday 01 June 2025 23:53:06 +0000 (0:00:00.673) 0:09:50.627 ***********
2025-06-01 23:54:21.247521 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 23:54:21.247525 | orchestrator |
2025-06-01 23:54:21.247529 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-06-01 23:54:21.247533 | orchestrator | Sunday 01 June 2025 23:53:07 +0000 (0:00:00.764) 0:09:51.392 ***********
2025-06-01 23:54:21.247538 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.247542 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.247546 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.247550 | orchestrator |
2025-06-01 23:54:21.247554 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-06-01 23:54:21.247558 | orchestrator | Sunday 01 June 2025 23:53:07 +0000 (0:00:00.320) 0:09:51.712 ***********
2025-06-01 23:54:21.247562 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:54:21.247566 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:54:21.247570 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:54:21.247574 | orchestrator |
2025-06-01 23:54:21.247578 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-06-01 23:54:21.247582 | orchestrator | Sunday 01 June 2025 23:53:08 +0000 (0:00:01.201) 0:09:52.913 ***********
2025-06-01 23:54:21.247586 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-01 23:54:21.247590 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-01 23:54:21.247595 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-01 23:54:21.247599 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.247603 | orchestrator |
2025-06-01 23:54:21.247607 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-06-01 23:54:21.247614 | orchestrator | Sunday 01 June 2025 23:53:09 +0000 (0:00:00.859) 0:09:53.773 ***********
2025-06-01 23:54:21.247621 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.247626 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.247630 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.247634 | orchestrator |
2025-06-01 23:54:21.247638 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-06-01 23:54:21.247642 | orchestrator |
2025-06-01 23:54:21.247646 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-01 23:54:21.247650 | orchestrator | Sunday 01 June 2025 23:53:10 +0000 (0:00:00.796) 0:09:54.569 ***********
2025-06-01 23:54:21.247655 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 23:54:21.247659 | orchestrator |
2025-06-01 23:54:21.247663 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-01 23:54:21.247667 | orchestrator | Sunday 01 June 2025 23:53:10 +0000 (0:00:00.503) 0:09:55.073 ***********
2025-06-01 23:54:21.247671 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 23:54:21.247675 | orchestrator |
2025-06-01 23:54:21.247679 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-01 23:54:21.247687 | orchestrator | Sunday 01 June 2025 23:53:11 +0000 (0:00:00.780) 0:09:55.854 ***********
2025-06-01 23:54:21.247691 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.247695 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.247699 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.247703 | orchestrator |
2025-06-01 23:54:21.247708 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-01 23:54:21.247712 | orchestrator | Sunday 01 June 2025 23:53:12 +0000 (0:00:00.325) 0:09:56.179 ***********
2025-06-01 23:54:21.247716 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.247720 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.247724 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.247728 | orchestrator |
2025-06-01 23:54:21.247732 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-06-01 23:54:21.247736 | orchestrator | Sunday 01 June 2025 23:53:12 +0000 (0:00:00.691) 0:09:56.870 ***********
2025-06-01 23:54:21.247741 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.247745 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.247749 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.247753 | orchestrator |
2025-06-01 23:54:21.247757 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-06-01 23:54:21.247761 | orchestrator | Sunday 01 June 2025 23:53:13 +0000 (0:00:00.658) 0:09:57.529 ***********
2025-06-01 23:54:21.247765 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.247769 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.247773 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.247777 | orchestrator |
2025-06-01 23:54:21.247781 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-06-01 23:54:21.247786 | orchestrator | Sunday 01 June 2025 23:53:14 +0000 (0:00:00.990) 0:09:58.519 ***********
2025-06-01 23:54:21.247790 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.247794 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.247798 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.247802 | orchestrator |
2025-06-01 23:54:21.247806 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-06-01 23:54:21.247810 | orchestrator | Sunday 01 June 2025 23:53:14 +0000 (0:00:00.331) 0:09:58.850 ***********
2025-06-01 23:54:21.247814 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.247818 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.247824 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.247831 | orchestrator |
2025-06-01 23:54:21.247838 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-01 23:54:21.247844 | orchestrator | Sunday 01 June 2025 23:53:15 +0000 (0:00:00.319) 0:09:59.169 ***********
2025-06-01 23:54:21.247851 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.247858 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.247865 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.247873 | orchestrator |
2025-06-01 23:54:21.247880 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-06-01 23:54:21.247885 | orchestrator | Sunday 01 June 2025 23:53:15 +0000 (0:00:00.323) 0:09:59.493 ***********
2025-06-01 23:54:21.247889 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.247893 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.247897 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.247901 | orchestrator |
2025-06-01 23:54:21.247905 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-06-01 23:54:21.247909 | orchestrator | Sunday 01 June 2025 23:53:16 +0000 (0:00:01.032) 0:10:00.525 ***********
2025-06-01 23:54:21.247940 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.247945 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.247949 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.247953 | orchestrator |
2025-06-01 23:54:21.247957 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-06-01 23:54:21.247961 | orchestrator | Sunday 01 June 2025 23:53:17 +0000 (0:00:00.748) 0:10:01.274 ***********
2025-06-01 23:54:21.247969 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.247974 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.247978 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.247982 | orchestrator |
2025-06-01 23:54:21.247986 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-06-01 23:54:21.247990 | orchestrator | Sunday 01 June 2025 23:53:17 +0000 (0:00:00.296) 0:10:01.570 ***********
2025-06-01 23:54:21.247994 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.247998 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.248002 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.248006 | orchestrator |
2025-06-01 23:54:21.248011 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-06-01 23:54:21.248015 | orchestrator | Sunday 01 June 2025 23:53:17 +0000 (0:00:00.303) 0:10:01.874 ***********
2025-06-01 23:54:21.248019 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.248023 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.248027 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.248031 | orchestrator |
2025-06-01 23:54:21.248038 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-06-01 23:54:21.248045 | orchestrator | Sunday 01 June 2025 23:53:18 +0000 (0:00:00.607) 0:10:02.481 ***********
2025-06-01 23:54:21.248049 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.248054 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.248058 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.248062 | orchestrator |
2025-06-01 23:54:21.248066 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-06-01 23:54:21.248070 | orchestrator | Sunday 01 June 2025 23:53:18 +0000 (0:00:00.329) 0:10:02.810 ***********
2025-06-01 23:54:21.248074 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.248078 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.248082 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.248086 | orchestrator |
2025-06-01 23:54:21.248090 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-06-01 23:54:21.248094 | orchestrator | Sunday 01 June 2025 23:53:18 +0000 (0:00:00.310) 0:10:03.121 ***********
2025-06-01 23:54:21.248099 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.248103 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.248107 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.248111 | orchestrator |
2025-06-01 23:54:21.248115 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-06-01 23:54:21.248119 | orchestrator | Sunday 01 June 2025 23:53:19 +0000 (0:00:00.297) 0:10:03.418 ***********
2025-06-01 23:54:21.248123 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.248127 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.248131 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.248136 | orchestrator |
2025-06-01 23:54:21.248140 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-06-01 23:54:21.248144 | orchestrator | Sunday 01 June 2025 23:53:19 +0000 (0:00:00.550) 0:10:03.969 ***********
2025-06-01 23:54:21.248148 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.248152 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.248156 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.248160 | orchestrator |
2025-06-01 23:54:21.248165 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-06-01 23:54:21.248169 | orchestrator | Sunday 01 June 2025 23:53:20 +0000 (0:00:00.288) 0:10:04.257 ***********
2025-06-01 23:54:21.248173 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.248177 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.248181 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.248185 | orchestrator |
2025-06-01 23:54:21.248189 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-06-01 23:54:21.248193 | orchestrator | Sunday 01 June 2025 23:53:20 +0000 (0:00:00.323) 0:10:04.581 ***********
2025-06-01 23:54:21.248197 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.248205 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.248209 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.248213 | orchestrator |
2025-06-01 23:54:21.248217 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2025-06-01 23:54:21.248222 | orchestrator | Sunday 01 June 2025 23:53:21 +0000 (0:00:00.745) 0:10:05.326 ***********
2025-06-01 23:54:21.248226 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 23:54:21.248230 | orchestrator |
2025-06-01 23:54:21.248234 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2025-06-01 23:54:21.248238 | orchestrator | Sunday 01 June 2025 23:53:21 +0000 (0:00:00.557) 0:10:05.883 ***********
2025-06-01 23:54:21.248242 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 23:54:21.248246 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-06-01 23:54:21.248250 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-06-01 23:54:21.248255 | orchestrator |
2025-06-01 23:54:21.248259 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2025-06-01 23:54:21.248263 | orchestrator | Sunday 01 June 2025 23:53:23 +0000 (0:00:02.195) 0:10:08.079 ***********
2025-06-01 23:54:21.248267 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-01 23:54:21.248271 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-06-01 23:54:21.248275 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:54:21.248279 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-01 23:54:21.248283 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-06-01 23:54:21.248287 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:54:21.248291 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-01 23:54:21.248296 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-06-01 23:54:21.248300 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:54:21.248304 | orchestrator |
2025-06-01 23:54:21.248308 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2025-06-01 23:54:21.248312 | orchestrator | Sunday 01 June 2025 23:53:25 +0000 (0:00:01.459) 0:10:09.538 ***********
2025-06-01 23:54:21.248316 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.248320 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.248324 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.248328 | orchestrator |
2025-06-01 23:54:21.248332 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2025-06-01 23:54:21.248336 | orchestrator | Sunday 01 June 2025 23:53:25 +0000 (0:00:00.351) 0:10:09.889 ***********
2025-06-01 23:54:21.248341 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 23:54:21.248345 | orchestrator |
2025-06-01 23:54:21.248349 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2025-06-01 23:54:21.248353 | orchestrator | Sunday 01 June 2025 23:53:26 +0000 (0:00:00.527) 0:10:10.417 ***********
2025-06-01 23:54:21.248358 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-06-01 23:54:21.248365 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-06-01 23:54:21.248372 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-06-01 23:54:21.248376 | orchestrator |
2025-06-01 23:54:21.248381 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2025-06-01 23:54:21.248385 | orchestrator | Sunday 01 June 2025 23:53:27 +0000 (0:00:01.341) 0:10:11.759 ***********
2025-06-01 23:54:21.248389 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 23:54:21.248393 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2025-06-01 23:54:21.248401 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 23:54:21.248405 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2025-06-01 23:54:21.248409 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 23:54:21.248413 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2025-06-01 23:54:21.248418 | orchestrator |
2025-06-01 23:54:21.248422 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2025-06-01 23:54:21.248426 | orchestrator | Sunday 01 June 2025 23:53:31 +0000 (0:00:04.211) 0:10:15.970 ***********
2025-06-01 23:54:21.248430 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 23:54:21.248434 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-06-01 23:54:21.248438 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 23:54:21.248442 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2025-06-01 23:54:21.248446 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 23:54:21.248450 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2025-06-01 23:54:21.248455 | orchestrator |
2025-06-01 23:54:21.248459 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2025-06-01 23:54:21.248463 | orchestrator | Sunday 01 June 2025 23:53:33 +0000 (0:00:02.157) 0:10:18.128 ***********
2025-06-01 23:54:21.248467 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-01 23:54:21.248471 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:54:21.248475 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-01 23:54:21.248479 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:54:21.248483 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-01 23:54:21.248487 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:54:21.248491 | orchestrator |
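"Create rgw keyrings" wraps a ceph auth call per RGW instance, delegated to the first monitor. A sketch using the capabilities Ceph's manual RGW deployment documents; the exact caps and keyring path ceph-ansible uses may differ, so treat both as assumptions:

- name: Create rgw keyring (sketch)
  ansible.builtin.command: >
    ceph auth get-or-create client.rgw.{{ ansible_facts['hostname'] }}.rgw0
    osd 'allow rwx' mon 'allow rw'
    -o /var/lib/ceph/radosgw/ceph-rgw.{{ ansible_facts['hostname'] }}.rgw0/keyring
  delegate_to: testbed-node-0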
2025-06-01 23:54:21.248494 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2025-06-01 23:54:21.248498 | orchestrator | Sunday 01 June 2025 23:53:35 +0000 (0:00:01.166) 0:10:19.294 ***********
2025-06-01 23:54:21.248502 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2025-06-01 23:54:21.248506 | orchestrator |
2025-06-01 23:54:21.248510 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2025-06-01 23:54:21.248513 | orchestrator | Sunday 01 June 2025 23:53:35 +0000 (0:00:00.226) 0:10:19.520 ***********
2025-06-01 23:54:21.248517 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-01 23:54:21.248522 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-01 23:54:21.248525 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-01 23:54:21.248529 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-01 23:54:21.248533 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-01 23:54:21.248537 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.248541 | orchestrator |
2025-06-01 23:54:21.248544 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2025-06-01 23:54:21.248548 | orchestrator | Sunday 01 June 2025 23:53:36 +0000 (0:00:01.124) 0:10:20.645 ***********
2025-06-01 23:54:21.248552 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-01 23:54:21.248560 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-01 23:54:21.248564 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-01 23:54:21.248568 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-01 23:54:21.248571 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-01 23:54:21.248575 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.248579 | orchestrator |
2025-06-01 23:54:21.248588 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2025-06-01 23:54:21.248592 | orchestrator | Sunday 01 June 2025 23:53:37 +0000 (0:00:00.572) 0:10:21.217 ***********
2025-06-01 23:54:21.248596 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-01 23:54:21.248600 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-01 23:54:21.248604 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-01 23:54:21.248608 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-01 23:54:21.248612 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-06-01 23:54:21.248615 | orchestrator |
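The 30-second "Create rgw pools" step loops over exactly the pool spec visible in the items above. A minimal equivalent with the pool list inlined from the log (delegation target and replica-count handling are assumptions):

- name: Create rgw pools (sketch)
  ansible.builtin.command: >
    ceph osd pool create {{ item.key }} {{ item.value.pg_num }} {{ item.value.pg_num }} replicated
  loop: "{{ rgw_create_pools | dict2items }}"
  vars:
    rgw_create_pools:
      default.rgw.buckets.data: { pg_num: 8, size: 3 }
      default.rgw.buckets.index: { pg_num: 8, size: 3 }
      default.rgw.control: { pg_num: 8, size: 3 }
      default.rgw.log: { pg_num: 8, size: 3 }
      default.rgw.meta: { pg_num: 8, size: 3 }
  delegate_to: testbed-node-0
  run_once: true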
2025-06-01 23:54:21.248619 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2025-06-01 23:54:21.248623 | orchestrator | Sunday 01 June 2025 23:54:07 +0000 (0:00:30.522) 0:10:51.739 ***********
2025-06-01 23:54:21.248627 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.248631 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.248634 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.248638 | orchestrator |
2025-06-01 23:54:21.248642 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2025-06-01 23:54:21.248646 | orchestrator | Sunday 01 June 2025 23:54:07 +0000 (0:00:00.319) 0:10:52.058 ***********
2025-06-01 23:54:21.248649 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.248653 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.248657 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.248661 | orchestrator |
2025-06-01 23:54:21.248664 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2025-06-01 23:54:21.248668 | orchestrator | Sunday 01 June 2025 23:54:08 +0000 (0:00:00.319) 0:10:52.378 ***********
2025-06-01 23:54:21.248672 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 23:54:21.248676 | orchestrator |
2025-06-01 23:54:21.248680 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2025-06-01 23:54:21.248683 | orchestrator | Sunday 01 June 2025 23:54:09 +0000 (0:00:00.771) 0:10:53.149 ***********
2025-06-01 23:54:21.248687 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 23:54:21.248691 | orchestrator |
2025-06-01 23:54:21.248695 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2025-06-01 23:54:21.248698 | orchestrator | Sunday 01 June 2025 23:54:09 +0000 (0:00:00.554) 0:10:53.703 ***********
2025-06-01 23:54:21.248702 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:54:21.248710 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:54:21.248714 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:54:21.248717 | orchestrator |
2025-06-01 23:54:21.248721 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2025-06-01 23:54:21.248725 | orchestrator | Sunday 01 June 2025 23:54:10 +0000 (0:00:01.197) 0:10:54.901 ***********
2025-06-01 23:54:21.248729 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:54:21.248732 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:54:21.248736 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:54:21.248740 | orchestrator |
2025-06-01 23:54:21.248744 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2025-06-01 23:54:21.248748 | orchestrator | Sunday 01 June 2025 23:54:12 +0000 (0:00:01.431) 0:10:56.333 ***********
2025-06-01 23:54:21.248751 | orchestrator | changed: [testbed-node-3]
2025-06-01 23:54:21.248755 | orchestrator | changed: [testbed-node-4]
2025-06-01 23:54:21.248759 | orchestrator | changed: [testbed-node-5]
2025-06-01 23:54:21.248763 | orchestrator |
2025-06-01 23:54:21.248766 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2025-06-01 23:54:21.248770 | orchestrator | Sunday 01 June 2025 23:54:13 +0000 (0:00:01.774) 0:10:58.107 ***********
2025-06-01 23:54:21.248774 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-06-01 23:54:21.248778 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-06-01 23:54:21.248781 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-06-01 23:54:21.248785 | orchestrator |
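Per instance (rgw0 on each node, port 8081, as in the items above), starting the gateway is a systemd call against the templated unit. A sketch assuming ceph-ansible's ceph-radosgw@rgw.<hostname>.<instance> unit naming, which is not shown in this log:

- name: Systemd start rgw container (sketch)
  ansible.builtin.systemd:
    name: ceph-radosgw@rgw.{{ ansible_facts['hostname'] }}.{{ item.instance_name }}
    enabled: true
    state: started
    daemon_reload: true
  loop:
    - { instance_name: rgw0, radosgw_frontend_port: 8081 }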
2025-06-01 23:54:21.248789 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-06-01 23:54:21.248793 | orchestrator | Sunday 01 June 2025 23:54:16 +0000 (0:00:02.731) 0:11:00.838 ***********
2025-06-01 23:54:21.248796 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.248800 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.248804 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.248808 | orchestrator |
2025-06-01 23:54:21.248812 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-06-01 23:54:21.248815 | orchestrator | Sunday 01 June 2025 23:54:17 +0000 (0:00:00.350) 0:11:01.188 ***********
2025-06-01 23:54:21.248819 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 23:54:21.248823 | orchestrator |
2025-06-01 23:54:21.248832 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-06-01 23:54:21.248836 | orchestrator | Sunday 01 June 2025 23:54:17 +0000 (0:00:00.491) 0:11:01.680 ***********
2025-06-01 23:54:21.248840 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.248843 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.248847 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.248851 | orchestrator |
2025-06-01 23:54:21.248855 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2025-06-01 23:54:21.248859 | orchestrator | Sunday 01 June 2025 23:54:18 +0000 (0:00:00.571) 0:11:02.252 ***********
2025-06-01 23:54:21.248862 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.248866 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:54:21.248870 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:54:21.248873 | orchestrator |
2025-06-01 23:54:21.248877 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2025-06-01 23:54:21.248881 | orchestrator | Sunday 01 June 2025 23:54:18 +0000 (0:00:00.602) 0:11:02.594 ***********
2025-06-01 23:54:21.248885 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-01 23:54:21.248888 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-01 23:54:21.248892 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-01 23:54:21.248896 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:54:21.248903 | orchestrator |
2025-06-01 23:54:21.248907 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-06-01 23:54:21.248923 | orchestrator | Sunday 01 June 2025 23:54:19 +0000 (0:00:00.602) 0:11:03.197 ***********
2025-06-01 23:54:21.248929 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:54:21.248933 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:54:21.248936 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:54:21.248940 | orchestrator |
2025-06-01 23:54:21.248944 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 23:54:21.248948 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0
2025-06-01 23:54:21.248952 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2025-06-01 23:54:21.248955 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2025-06-01 23:54:21.248959 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0
2025-06-01 23:54:21.248963 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2025-06-01 23:54:21.248967 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2025-06-01 23:54:21.248971 | orchestrator |
2025-06-01 23:54:21.248974 | orchestrator |
2025-06-01 23:54:21.248978 | orchestrator |
2025-06-01 23:54:21.248982 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 23:54:21.248986 | orchestrator | Sunday 01 June 2025 23:54:19 +0000 (0:00:00.284) 0:11:03.482 ***********
2025-06-01 23:54:21.248990 | orchestrator | ===============================================================================
2025-06-01 23:54:21.248993 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 56.58s
2025-06-01 23:54:21.248997 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 42.51s
2025-06-01 23:54:21.249001 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.52s
2025-06-01 23:54:21.249005 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.44s
2025-06-01 23:54:21.249008 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.94s
2025-06-01 23:54:21.249012 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.43s
2025-06-01 23:54:21.249016 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.92s
2025-06-01 23:54:21.249020 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.19s
2025-06-01 23:54:21.249023 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.81s
2025-06-01 23:54:21.249027 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.48s
2025-06-01 23:54:21.249031 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.37s
2025-06-01 23:54:21.249034 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.37s
2025-06-01 23:54:21.249038 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.57s
2025-06-01 23:54:21.249042 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.21s
2025-06-01 23:54:21.249046 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.01s
2025-06-01 23:54:21.249049 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 3.92s
2025-06-01 23:54:21.249053 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.71s
2025-06-01 23:54:21.249057 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.63s
2025-06-01 23:54:21.249084 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.49s
2025-06-01 23:54:21.249087 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.24s
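The INFO lines that follow come from the OSISM task watcher: the deploy job polls the state of the queued tasks and sleeps between rounds until they leave STARTED. The same wait-until-done pattern can be expressed in Ansible as below; the state-query command is a hypothetical placeholder, not the real OSISM CLI call:

- name: Wait until the task leaves the STARTED state (sketch)
  ansible.builtin.command: /usr/local/bin/task-state df8ec50c-71d9-4046-ad33-b607b7b95f49  # hypothetical helper
  register: task_state
  until: task_state.stdout != 'STARTED'
  retries: 600
  delay: 1
  changed_when: false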
check
[... ~20 near-identical polling rounds omitted: every ~3 seconds from 23:54:24 to 23:55:22, tasks df8ec50c-71d9-4046-ad33-b607b7b95f49, 28c76449-029d-410f-862d-669a3a67231b and 04713950-2b9d-457e-861b-6297b8a6697c were each reported in state STARTED, followed by "Wait 1 second(s) until the next check" ...]
2025-06-01 23:55:25.426648 | orchestrator | 2025-06-01 23:55:25 | INFO  | Task df8ec50c-71d9-4046-ad33-b607b7b95f49 is in state SUCCESS 2025-06-01 23:55:25.428280 | orchestrator | 2025-06-01 23:55:25.428337 | orchestrator |
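The wait loop above is a simple poll-until-terminal pattern: the deploy wrapper asks the OSISM manager for the state of each task, sleeps, and repeats until a terminal state is reported. A minimal sketch of that pattern, assuming a hypothetical get_task_state(task_id) helper in place of the real client call (which is not shown in this log):

    import time

    TERMINAL_STATES = {"SUCCESS", "FAILURE"}

    def wait_for_tasks(task_ids, get_task_state, interval=1.0):
        # Poll every task until it reaches a terminal state, as in the log above.
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):  # sorted() copies, so discard below is safe
                state = get_task_state(task_id)  # e.g. "STARTED" or "SUCCESS"
                print(f"Task {task_id} is in state {state}")
                if state in TERMINAL_STATES:
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval:.0f} second(s) until the next check")
                time.sleep(interval)

Note that the log advances roughly three seconds per round even though the loop only announces a one-second wait; the per-task state queries themselves account for the difference.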
2025-06-01 23:55:25.428358 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-01 23:55:25.428378 | orchestrator | 2025-06-01 23:55:25.428397 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-01 23:55:25.428416 | orchestrator | Sunday 01 June 2025 23:52:26 +0000 (0:00:00.259) 0:00:00.259 *********** 2025-06-01 23:55:25.428434 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:55:25.428453 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:55:25.428470 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:55:25.428488 | orchestrator | 2025-06-01 23:55:25.428505 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-01 23:55:25.428523 | orchestrator | Sunday 01 June 2025 23:52:27 +0000 (0:00:00.310) 0:00:00.570 *********** 2025-06-01 23:55:25.428541 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-06-01 23:55:25.428560 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-06-01 23:55:25.428578 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-06-01 23:55:25.428596 | orchestrator | 2025-06-01 23:55:25.428614 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-06-01 23:55:25.428631 | orchestrator | 2025-06-01 23:55:25.428649 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-01 23:55:25.428667 | orchestrator | Sunday 01 June 2025 23:52:27 +0000 (0:00:00.420) 0:00:00.991 *********** 2025-06-01 23:55:25.428685 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:55:25.428703 | orchestrator | 2025-06-01 23:55:25.428721 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-06-01 23:55:25.428738 | orchestrator | Sunday 01 June 2025 23:52:28 +0000 (0:00:00.515) 0:00:01.506 *********** 2025-06-01 23:55:25.428755 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-01 23:55:25.428772 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-01 23:55:25.428787 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-01 23:55:25.428806 | orchestrator | 2025-06-01 23:55:25.428822 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-06-01 23:55:25.428837 | orchestrator | Sunday 01 June 2025 23:52:28 +0000 (0:00:00.642) 0:00:02.148 *********** 2025-06-01 23:55:25.428858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200',
'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-01 23:55:25.428930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-01 23:55:25.428989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-01 23:55:25.429015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-01 23:55:25.429037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-01 23:55:25.429067 | orchestrator | changed: [testbed-node-2] => (item=opensearch-dashboards; same service dict as for testbed-node-1, healthcheck against http://192.168.16.12:5601) 2025-06-01 23:55:25.429099 | orchestrator | 2025-06-01 23:55:25.429117 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-01 23:55:25.429133 | orchestrator | Sunday 01 June 2025 23:52:30 +0000 (0:00:01.800) 0:00:03.949 *********** 2025-06-01 23:55:25.429148 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:55:25.429164 | orchestrator | 2025-06-01 23:55:25.429180 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-06-01 23:55:25.429195 | orchestrator | Sunday 01 June 2025 23:52:31 +0000 (0:00:00.579) 0:00:04.529 *********** 2025-06-01 23:55:25.429226 | orchestrator | changed: [testbed-node-0], [testbed-node-1], [testbed-node-2] => (items opensearch and opensearch-dashboards; same service dicts as printed above for each node)
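The per-node service dicts in this play are identical except for the healthcheck URL, which is rendered from each node's API interface address. A sketch of that per-host templating; api_addresses and build_service are illustrative names, not kolla-ansible internals:

    import copy

    # API addresses as they appear in the healthcheck URLs of this log.
    api_addresses = {
        "testbed-node-0": "192.168.16.10",
        "testbed-node-1": "192.168.16.11",
        "testbed-node-2": "192.168.16.12",
    }

    base_service = {
        "container_name": "opensearch",
        "image": "registry.osism.tech/kolla/opensearch:2024.2",
        "healthcheck": {
            "interval": "30", "retries": "3", "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://{address}:9200"],
            "timeout": "30",
        },
    }

    def build_service(host):
        # Deep-copy the shared template and substitute the host's API address.
        service = copy.deepcopy(base_service)
        cmd = service["healthcheck"]["test"][1]
        service["healthcheck"]["test"][1] = cmd.format(address=api_addresses[host])
        return service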
2025-06-01 23:55:25.429349 | orchestrator | 2025-06-01 23:55:25.429364 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-06-01 23:55:25.429379 | orchestrator | Sunday 01 June 2025 23:52:34 +0000 (0:00:02.805) 0:00:07.335 *********** 2025-06-01 23:55:25.429394 | orchestrator | skipping: [testbed-node-0], [testbed-node-1], [testbed-node-2] => (items opensearch and opensearch-dashboards; same service dicts as above) 2025-06-01 23:55:25.429569 | orchestrator | 2025-06-01 23:55:25.429584 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-06-01 23:55:25.429598 | orchestrator | Sunday 01 June 2025 23:52:35 +0000 (0:00:01.273) 0:00:08.608 *********** 2025-06-01 23:55:25.429614 | orchestrator | skipping: [testbed-node-0], [testbed-node-1], [testbed-node-2] => (items opensearch and opensearch-dashboards; same service dicts as above) 2025-06-01 23:55:25.429783 | orchestrator |
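Both backend-TLS tasks skip every host and item: the copy is gated on backend TLS being enabled for the service, and this testbed runs its internal endpoints without it. A rough Python mirror of that guard, assuming a kolla_enable_tls_backend flag named after the kolla-ansible option:

    kolla_enable_tls_backend = False  # assumption: backend TLS is off in this testbed

    services = {
        "opensearch": {"enabled": True},
        "opensearch-dashboards": {"enabled": True},
    }

    def copy_backend_tls_material(name):
        print(f"would copy backend TLS certificate and key for {name}")

    for name, service in services.items():
        # Mirrors the skip condition seen above: only copy when the service
        # is enabled and backend TLS is switched on.
        if service["enabled"] and kolla_enable_tls_backend:
            copy_backend_tls_material(name)
        else:
            print(f"skipping: (item={name})")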
2025-06-01 23:55:25.429797 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-06-01 23:55:25.429810 | orchestrator | Sunday 01 June 2025 23:52:36 +0000 (0:00:01.093) 0:00:09.701 *********** 2025-06-01 23:55:25.429823 | orchestrator | changed: [testbed-node-0], [testbed-node-1], [testbed-node-2] => (items opensearch and opensearch-dashboards; same service dicts as printed above for each node)
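These config.json files are what the kolla entrypoint consumes at container start: it copies each listed source file into place with the given owner and permissions, then execs the service command. A minimal sketch of writing one such file; the command and paths are plausible examples, not the exact testbed contents:

    import json

    config = {
        "command": "/usr/share/opensearch/bin/opensearch",  # illustrative
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/opensearch.yml",
                "dest": "/usr/share/opensearch/config/opensearch.yml",
                "owner": "opensearch",
                "perm": "0600",
            },
        ],
    }

    # Rendered to /etc/kolla/opensearch/config.json on the host, which the
    # 'volumes' entries above bind-mount into /var/lib/kolla/config_files/.
    with open("config.json", "w") as f:
        json.dump(config, f, indent=4)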
2025-06-01 23:55:25.429980 | orchestrator | 2025-06-01 23:55:25.429994 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-06-01 23:55:25.430009 | orchestrator | Sunday 01 June 2025 23:52:38 +0000 (0:00:02.150) 0:00:11.852 *********** 2025-06-01 23:55:25.430095 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:55:25.430109 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:55:25.430124 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:55:25.430139 | orchestrator | 2025-06-01 23:55:25.430159 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-06-01 23:55:25.430174 | orchestrator | Sunday 01 June 2025 23:52:41 +0000 (0:00:03.184) 0:00:15.037 *********** 2025-06-01 23:55:25.430188 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:55:25.430203 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:55:25.430218 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:55:25.430233 | orchestrator |
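The healthcheck blocks in the service dicts above become Docker healthchecks: run healthcheck_curl against the node's API address every 30 seconds, with 3 retries and a 30-second timeout. A rough standard-library stand-in for what healthcheck_curl checks; the real script ships inside the kolla images:

    import sys
    import urllib.request

    def healthcheck_curl(url, timeout=30):
        # Exit code 0 when the endpoint answers below HTTP 400, 1 otherwise.
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                return 0 if response.status < 400 else 1
        except Exception:
            return 1

    if __name__ == "__main__":
        sys.exit(healthcheck_curl(sys.argv[1] if len(sys.argv) > 1 else "http://192.168.16.10:9200"))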
2025-06-01 23:55:25.430247 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-06-01 23:55:25.430261 | orchestrator | Sunday 01 June 2025 23:52:43 +0000 (0:00:01.445) 0:00:16.482 *********** 2025-06-01 23:55:25.430276 | orchestrator | changed: [testbed-node-0], [testbed-node-1], [testbed-node-2] => (items opensearch and opensearch-dashboards; same service dicts as printed above for each node) 2025-06-01 23:55:25.430406 | orchestrator | 2025-06-01 23:55:25.430420 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-01 23:55:25.430434 | orchestrator | Sunday 01 June 2025 23:52:45 +0000 (0:00:02.135) 0:00:18.618 *********** 2025-06-01 23:55:25.430449 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:55:25.430463 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:55:25.430477 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:55:25.430500 | orchestrator | 2025-06-01 23:55:25.430514 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-01 23:55:25.430529 | orchestrator | Sunday 01 June 2025 23:52:45 +0000 (0:00:00.295) 0:00:18.913 *********** 2025-06-01 23:55:25.430543 | orchestrator | 2025-06-01 23:55:25.430556 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-01 23:55:25.430570 | orchestrator | Sunday 01 June 2025 23:52:45 +0000 (0:00:00.062) 0:00:18.976 *********** 2025-06-01 23:55:25.430584 | orchestrator | 2025-06-01 23:55:25.430598 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-01 23:55:25.430612 | orchestrator | Sunday 01 June 2025 23:52:45 +0000
(0:00:00.062) 0:00:19.038 *********** 2025-06-01 23:55:25.430627 | orchestrator | 2025-06-01 23:55:25.430641 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-06-01 23:55:25.430656 | orchestrator | Sunday 01 June 2025 23:52:46 +0000 (0:00:00.256) 0:00:19.294 *********** 2025-06-01 23:55:25.430669 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:55:25.430683 | orchestrator | 2025-06-01 23:55:25.430698 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-06-01 23:55:25.430713 | orchestrator | Sunday 01 June 2025 23:52:46 +0000 (0:00:00.227) 0:00:19.522 *********** 2025-06-01 23:55:25.430727 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:55:25.430741 | orchestrator | 2025-06-01 23:55:25.430755 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-06-01 23:55:25.430769 | orchestrator | Sunday 01 June 2025 23:52:46 +0000 (0:00:00.226) 0:00:19.748 *********** 2025-06-01 23:55:25.430783 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:55:25.430797 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:55:25.430811 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:55:25.430824 | orchestrator | 2025-06-01 23:55:25.430837 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-06-01 23:55:25.430851 | orchestrator | Sunday 01 June 2025 23:53:49 +0000 (0:01:03.447) 0:01:23.196 *********** 2025-06-01 23:55:25.430864 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:55:25.430877 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:55:25.430947 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:55:25.430962 | orchestrator | 2025-06-01 23:55:25.430977 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-01 23:55:25.430991 | orchestrator | Sunday 01 June 2025 23:55:13 +0000 (0:01:23.898) 0:02:47.094 *********** 2025-06-01 23:55:25.431005 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:55:25.431020 | orchestrator | 2025-06-01 23:55:25.431035 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-06-01 23:55:25.431056 | orchestrator | Sunday 01 June 2025 23:55:14 +0000 (0:00:00.688) 0:02:47.783 *********** 2025-06-01 23:55:25.431071 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:55:25.431086 | orchestrator | 2025-06-01 23:55:25.431100 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-06-01 23:55:25.431114 | orchestrator | Sunday 01 June 2025 23:55:16 +0000 (0:00:02.232) 0:02:50.015 *********** 2025-06-01 23:55:25.431128 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:55:25.431140 | orchestrator | 2025-06-01 23:55:25.431155 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-06-01 23:55:25.431169 | orchestrator | Sunday 01 June 2025 23:55:18 +0000 (0:00:02.211) 0:02:52.227 *********** 2025-06-01 23:55:25.431183 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:55:25.431196 | orchestrator | 2025-06-01 23:55:25.431209 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-06-01 23:55:25.431223 | orchestrator | Sunday 01 June 2025 23:55:21 +0000 (0:00:02.567) 0:02:54.794 *********** 2025-06-01 23:55:25.431237 
| orchestrator | changed: [testbed-node-0] 2025-06-01 23:55:25.431252 | orchestrator | 2025-06-01 23:55:25.431267 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 23:55:25.431283 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-01 23:55:25.431308 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-01 23:55:25.431323 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-01 23:55:25.431338 | orchestrator | 2025-06-01 23:55:25.431353 | orchestrator | 2025-06-01 23:55:25.431368 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 23:55:25.431390 | orchestrator | Sunday 01 June 2025 23:55:23 +0000 (0:00:02.398) 0:02:57.192 *********** 2025-06-01 23:55:25.431404 | orchestrator | =============================================================================== 2025-06-01 23:55:25.431417 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 83.90s 2025-06-01 23:55:25.431432 | orchestrator | opensearch : Restart opensearch container ------------------------------ 63.45s 2025-06-01 23:55:25.431447 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.18s 2025-06-01 23:55:25.431461 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.81s 2025-06-01 23:55:25.431474 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.57s 2025-06-01 23:55:25.431489 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.40s 2025-06-01 23:55:25.431504 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.23s 2025-06-01 23:55:25.431518 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.21s 2025-06-01 23:55:25.431532 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.15s 2025-06-01 23:55:25.431546 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.14s 2025-06-01 23:55:25.431560 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.80s 2025-06-01 23:55:25.431575 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.45s 2025-06-01 23:55:25.431590 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.27s 2025-06-01 23:55:25.431604 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.09s 2025-06-01 23:55:25.431619 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.69s 2025-06-01 23:55:25.431633 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.64s 2025-06-01 23:55:25.431646 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.58s 2025-06-01 23:55:25.431660 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s 2025-06-01 23:55:25.431673 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s 2025-06-01 23:55:25.431688 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.38s
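The three retention tasks near the end of this play map onto OpenSearch's Index State Management API: look up a policy, create it when missing, then attach it to existing indices. A sketch with the requests library against the internal endpoint; the policy body and index pattern are illustrative, since the actual policy OSISM applies is not shown in this log:

    import requests

    BASE = "http://192.168.16.10:9200"  # assumption: internal OpenSearch endpoint
    POLICY_ID = "log-retention"         # illustrative policy name

    policy = {"policy": {
        "description": "Delete old log indices",
        "default_state": "keep",
        "states": [
            {"name": "keep", "actions": [], "transitions": [
                {"state_name": "delete", "conditions": {"min_index_age": "14d"}}]},
            {"name": "delete", "actions": [{"delete": {}}], "transitions": []},
        ],
    }}

    # Check if a log retention policy exists
    r = requests.get(f"{BASE}/_plugins/_ism/policies/{POLICY_ID}")
    if r.status_code == 404:
        # Create new log retention policy
        requests.put(f"{BASE}/_plugins/_ism/policies/{POLICY_ID}", json=policy).raise_for_status()
        # Apply retention policy to existing indices (index pattern is illustrative)
        requests.post(f"{BASE}/_plugins/_ism/add/*", json={"policy_id": POLICY_ID}).raise_for_status()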
2025-06-01 23:55:25.433019 | orchestrator | 2025-06-01 23:55:25 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED
2025-06-01 23:55:25.433060 | orchestrator | 2025-06-01 23:55:25 | INFO  | Task 04713950-2b9d-457e-861b-6297b8a6697c is in state STARTED
2025-06-01 23:55:25.433073 | orchestrator | 2025-06-01 23:55:25 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:55:28.477609 | orchestrator | 2025-06-01 23:55:28 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED
2025-06-01 23:55:28.478232 | orchestrator | 2025-06-01 23:55:28 | INFO  | Task 04713950-2b9d-457e-861b-6297b8a6697c is in state STARTED
2025-06-01 23:55:28.478268 | orchestrator | 2025-06-01 23:55:28 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:55:31.528406 | orchestrator | 2025-06-01 23:55:31 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED
2025-06-01 23:55:31.530453 | orchestrator | 2025-06-01 23:55:31 | INFO  | Task 04713950-2b9d-457e-861b-6297b8a6697c is in state STARTED
2025-06-01 23:55:31.530530 | orchestrator | 2025-06-01 23:55:31 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:55:34.574135 | orchestrator | 2025-06-01 23:55:34 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED
2025-06-01 23:55:34.574280 | orchestrator | 2025-06-01 23:55:34 | INFO  | Task 04713950-2b9d-457e-861b-6297b8a6697c is in state STARTED
2025-06-01 23:55:34.574294 | orchestrator | 2025-06-01 23:55:34 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:55:37.625975 | orchestrator | 2025-06-01 23:55:37 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state STARTED
2025-06-01 23:55:37.626635 | orchestrator | 2025-06-01 23:55:37 | INFO  | Task 04713950-2b9d-457e-861b-6297b8a6697c is in state STARTED
2025-06-01 23:55:37.626799 | orchestrator | 2025-06-01 23:55:37 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:55:40.694167 | orchestrator | 2025-06-01 23:55:40 | INFO  | Task e4cbb99e-200a-41d9-903b-acafd41826ad is in state STARTED
2025-06-01 23:55:40.696934 | orchestrator | 2025-06-01 23:55:40 | INFO  | Task 28c76449-029d-410f-862d-669a3a67231b is in state SUCCESS
2025-06-01 23:55:40.699469 | orchestrator |
2025-06-01 23:55:40.699532 | orchestrator |
2025-06-01 23:55:40.699553 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2025-06-01 23:55:40.699573 | orchestrator |
2025-06-01 23:55:40.699590 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-06-01 23:55:40.699609 | orchestrator | Sunday 01 June 2025 23:52:26 +0000 (0:00:00.105) 0:00:00.105 ***********
2025-06-01 23:55:40.699945 | orchestrator | ok: [localhost] => {
2025-06-01 23:55:40.699976 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2025-06-01 23:55:40.699997 | orchestrator | }
2025-06-01 23:55:40.700018 | orchestrator |
2025-06-01 23:55:40.700037 | orchestrator | TASK [Check MariaDB service] ***************************************************
2025-06-01 23:55:40.700056 | orchestrator | Sunday 01 June 2025 23:52:26 +0000 (0:00:00.040) 0:00:00.145 ***********
2025-06-01 23:55:40.700076 | orchestrator | fatal: [localhost]: FAILED!
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-06-01 23:55:40.700097 | orchestrator | ...ignoring 2025-06-01 23:55:40.700116 | orchestrator | 2025-06-01 23:55:40.700135 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-06-01 23:55:40.700154 | orchestrator | Sunday 01 June 2025 23:52:29 +0000 (0:00:02.835) 0:00:02.980 *********** 2025-06-01 23:55:40.700173 | orchestrator | skipping: [localhost] 2025-06-01 23:55:40.700192 | orchestrator | 2025-06-01 23:55:40.700211 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-06-01 23:55:40.700230 | orchestrator | Sunday 01 June 2025 23:52:29 +0000 (0:00:00.056) 0:00:03.037 *********** 2025-06-01 23:55:40.700248 | orchestrator | ok: [localhost] 2025-06-01 23:55:40.700267 | orchestrator | 2025-06-01 23:55:40.700285 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-01 23:55:40.700304 | orchestrator | 2025-06-01 23:55:40.700323 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-01 23:55:40.700342 | orchestrator | Sunday 01 June 2025 23:52:29 +0000 (0:00:00.179) 0:00:03.217 *********** 2025-06-01 23:55:40.700361 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:55:40.700380 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:55:40.700399 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:55:40.700418 | orchestrator | 2025-06-01 23:55:40.700438 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-01 23:55:40.700457 | orchestrator | Sunday 01 June 2025 23:52:30 +0000 (0:00:00.317) 0:00:03.534 *********** 2025-06-01 23:55:40.700476 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-06-01 23:55:40.700543 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-06-01 23:55:40.700563 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-06-01 23:55:40.700581 | orchestrator | 2025-06-01 23:55:40.700599 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-06-01 23:55:40.700617 | orchestrator | 2025-06-01 23:55:40.700636 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-06-01 23:55:40.700656 | orchestrator | Sunday 01 June 2025 23:52:31 +0000 (0:00:00.790) 0:00:04.325 *********** 2025-06-01 23:55:40.700674 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-01 23:55:40.700693 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-06-01 23:55:40.700714 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-06-01 23:55:40.700733 | orchestrator | 2025-06-01 23:55:40.700751 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-01 23:55:40.700770 | orchestrator | Sunday 01 June 2025 23:52:31 +0000 (0:00:00.428) 0:00:04.753 *********** 2025-06-01 23:55:40.700789 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:55:40.700810 | orchestrator | 2025-06-01 23:55:40.700830 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-06-01 23:55:40.700850 | orchestrator | Sunday 01 June 2025 23:52:32 +0000 (0:00:00.683) 0:00:05.437 *********** 2025-06-01 
23:55:40.700956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-01 23:55:40.700986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-01 23:55:40.701032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-01 23:55:40.701053 | orchestrator | 2025-06-01 23:55:40.701085 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-06-01 23:55:40.701105 | orchestrator | Sunday 01 June 2025 23:52:35 +0000 (0:00:03.125) 0:00:08.562 *********** 2025-06-01 23:55:40.701124 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:55:40.701145 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:55:40.701355 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:55:40.701381 | orchestrator | 2025-06-01 23:55:40.701400 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-06-01 23:55:40.701420 | orchestrator | Sunday 01 June 2025 23:52:36 +0000 (0:00:00.890) 0:00:09.453 *********** 2025-06-01 23:55:40.701440 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:55:40.701461 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:55:40.701481 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:55:40.701501 | orchestrator | 2025-06-01 23:55:40.701521 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-06-01 23:55:40.701542 | orchestrator | Sunday 01 June 2025 23:52:37 +0000 (0:00:01.515) 0:00:10.968 *********** 2025-06-01 23:55:40.701578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-01 23:55:40.701623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-01 23:55:40.701649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-01 23:55:40.701683 | orchestrator | 2025-06-01 23:55:40.701704 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-06-01 23:55:40.701725 | orchestrator | Sunday 01 June 2025 23:52:41 +0000 (0:00:03.656) 0:00:14.624 *********** 2025-06-01 23:55:40.701745 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:55:40.701766 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:55:40.701786 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:55:40.701807 | orchestrator | 2025-06-01 23:55:40.701827 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-06-01 23:55:40.701847 | orchestrator | Sunday 01 June 2025 23:52:42 +0000 (0:00:01.072) 0:00:15.697 *********** 2025-06-01 23:55:40.701867 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:55:40.701955 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:55:40.701975 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:55:40.701994 | orchestrator | 2025-06-01 23:55:40.702012 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-01 23:55:40.702108 | orchestrator | Sunday 01 June 2025 23:52:46 +0000 (0:00:03.913) 0:00:19.610 *********** 2025-06-01 23:55:40.702131 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:55:40.702151 | orchestrator | 2025-06-01 23:55:40.702173 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-06-01 23:55:40.702192 | orchestrator | Sunday 01 June 2025 23:52:46 +0000 (0:00:00.564) 0:00:20.174 *********** 2025-06-01 23:55:40.702246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 
'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-01 23:55:40.702285 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:55:40.702309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 
fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-01 23:55:40.702332 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:55:40.702373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-01 23:55:40.702406 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:55:40.702428 | orchestrator | 2025-06-01 23:55:40.702449 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-06-01 23:55:40.702469 | orchestrator | Sunday 01 June 2025 23:52:50 +0000 (0:00:03.413) 0:00:23.587 *********** 2025-06-01 23:55:40.702511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-01 23:55:40.702547 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:55:40.702584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-01 23:55:40.702616 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:55:40.702637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-01 23:55:40.702656 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:55:40.702675 | orchestrator | 2025-06-01 23:55:40.702694 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-06-01 23:55:40.702714 | orchestrator | Sunday 01 June 2025 23:52:53 +0000 (0:00:02.856) 0:00:26.444 *********** 2025-06-01 23:55:40.702743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-01 23:55:40.702785 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:55:40.702818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-01 23:55:40.702839 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:55:40.702866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-01 23:55:40.702971 | 
orchestrator | skipping: [testbed-node-2] 2025-06-01 23:55:40.702991 | orchestrator | 2025-06-01 23:55:40.703011 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-06-01 23:55:40.703029 | orchestrator | Sunday 01 June 2025 23:52:56 +0000 (0:00:03.142) 0:00:29.587 *********** 2025-06-01 23:55:40.703060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-01 23:55:40.703088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': 
'3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-01 23:55:40.703129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-01 23:55:40.703148 | orchestrator | 2025-06-01 23:55:40.703164 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-06-01 23:55:40.703181 | orchestrator | Sunday 01 June 2025 23:52:59 +0000 (0:00:03.403) 0:00:32.990 *********** 2025-06-01 23:55:40.703198 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:55:40.703214 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:55:40.703232 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:55:40.703248 | orchestrator | 2025-06-01 23:55:40.703264 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-06-01 23:55:40.703282 | orchestrator | Sunday 01 June 2025 23:53:00 +0000 (0:00:01.188) 0:00:34.178 *********** 2025-06-01 23:55:40.703299 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:55:40.703317 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:55:40.703334 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:55:40.703350 | orchestrator | 2025-06-01 23:55:40.703367 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-06-01 23:55:40.703384 | orchestrator | Sunday 01 June 2025 23:53:01 +0000 
(0:00:00.400) 0:00:34.579 ***********
2025-06-01 23:55:40.703399 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:55:40.703414 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:55:40.703430 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:55:40.703445 | orchestrator |
2025-06-01 23:55:40.703461 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2025-06-01 23:55:40.703477 | orchestrator | Sunday 01 June 2025 23:53:01 +0000 (0:00:00.310) 0:00:34.889 ***********
2025-06-01 23:55:40.703496 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2025-06-01 23:55:40.703512 | orchestrator | ...ignoring
2025-06-01 23:55:40.703527 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2025-06-01 23:55:40.703543 | orchestrator | ...ignoring
2025-06-01 23:55:40.703559 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2025-06-01 23:55:40.703589 | orchestrator | ...ignoring
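Note: "Check MariaDB service port liveness" probes the server greeting rather than the bare TCP port: a MariaDB server announces itself with a banner containing the string "MariaDB", so a timeout here means "nothing speaking MariaDB on that port yet", which is expected and ignored on a first deployment (hence ...ignoring). The earlier "Check MariaDB service" against 192.168.16.9:3306 is the same probe pointed at the internal VIP. A sketch of this kind of probe (host and timeout values are illustrative, not the role's literal task):

  - name: Check MariaDB service port liveness
    ansible.builtin.wait_for:
      host: "192.168.16.10"       # the node's api_interface address
      port: 3306
      connect_timeout: 1
      timeout: 10
      search_regex: "MariaDB"     # match the server greeting, not just an open port
    register: check_mariadb_port_liveness
    ignore_errors: true

The registered result is what the following "Divide hosts by ..." tasks use to sort nodes into bootstrap, restart and start groups.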
2025-06-01 23:55:40.703605 | orchestrator |
2025-06-01 23:55:40.703621 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2025-06-01 23:55:40.703639 | orchestrator | Sunday 01 June 2025 23:53:12 +0000 (0:00:10.880) 0:00:45.770 ***********
2025-06-01 23:55:40.703656 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:55:40.703672 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:55:40.703689 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:55:40.703705 | orchestrator |
2025-06-01 23:55:40.703722 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2025-06-01 23:55:40.703739 | orchestrator | Sunday 01 June 2025 23:53:13 +0000 (0:00:00.647) 0:00:46.417 ***********
2025-06-01 23:55:40.703755 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:55:40.703773 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:55:40.703789 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:55:40.703806 | orchestrator |
2025-06-01 23:55:40.703831 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2025-06-01 23:55:40.703847 | orchestrator | Sunday 01 June 2025 23:53:13 +0000 (0:00:00.416) 0:00:46.834 ***********
2025-06-01 23:55:40.703862 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:55:40.703902 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:55:40.703921 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:55:40.703939 | orchestrator |
2025-06-01 23:55:40.703958 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2025-06-01 23:55:40.703976 | orchestrator | Sunday 01 June 2025 23:53:13 +0000 (0:00:00.428) 0:00:47.262 ***********
2025-06-01 23:55:40.703995 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:55:40.704013 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:55:40.704031 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:55:40.704050 | orchestrator |
2025-06-01 23:55:40.704069 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2025-06-01 23:55:40.704088 | orchestrator | Sunday 01 June 2025 23:53:14 +0000 (0:00:00.401) 0:00:47.664 ***********
2025-06-01 23:55:40.704104 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:55:40.704296 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:55:40.704314 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:55:40.704330 | orchestrator |
2025-06-01 23:55:40.704345 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2025-06-01 23:55:40.704360 | orchestrator | Sunday 01 June 2025 23:53:15 +0000 (0:00:00.662) 0:00:48.327 ***********
2025-06-01 23:55:40.704388 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:55:40.704421 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:55:40.704435 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:55:40.704450 | orchestrator |
2025-06-01 23:55:40.704465 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-06-01 23:55:40.704480 | orchestrator | Sunday 01 June 2025 23:53:15 +0000 (0:00:00.419) 0:00:48.746 ***********
2025-06-01 23:55:40.704494 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:55:40.704510 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:55:40.704524 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2025-06-01 23:55:40.704539 | orchestrator |
2025-06-01 23:55:40.704553 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2025-06-01 23:55:40.704568 | orchestrator | Sunday 01 June 2025 23:53:15 +0000 (0:00:00.370) 0:00:49.117 ***********
2025-06-01 23:55:40.704583 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:55:40.704598 | orchestrator |
2025-06-01 23:55:40.704614 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2025-06-01 23:55:40.704631 | orchestrator | Sunday 01 June 2025 23:53:25 +0000 (0:00:10.139) 0:00:59.257 ***********
2025-06-01 23:55:40.704646 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:55:40.704662 | orchestrator |
2025-06-01 23:55:40.704677 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-06-01 23:55:40.704691 | orchestrator | Sunday 01 June 2025 23:53:26 +0000 (0:00:00.127) 0:00:59.385 ***********
2025-06-01 23:55:40.704723 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:55:40.704740 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:55:40.704755 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:55:40.704770 | orchestrator |
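Note: "Running MariaDB bootstrap container" is the Galera-specific step: exactly one node (testbed-node-0, recorded by "Store bootstrap host name into facts") must start mariadbd with --wsrep-new-cluster so that a new cluster is formed instead of joining an existing one. An illustrative sketch using community.docker (the role itself drives containers through kolla-ansible's own module; image and volume names are copied from the task output above, everything else is assumed):

  - name: Bootstrap the first Galera node (illustration only)
    community.docker.docker_container:
      name: mariadb_bootstrap
      image: registry.osism.tech/kolla/mariadb-server:2024.2
      command: mariadbd --wsrep-new-cluster   # form a new cluster on this node only
      volumes:
        - mariadb:/var/lib/mysql
      detach: true
    when: inventory_hostname == mariadb_bootstrap_host  # hypothetical fact name

The remaining nodes start without the flag and perform a state transfer from the bootstrap node, which is what the handlers below orchestrate.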
2025-06-01 23:55:40.704786 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2025-06-01 23:55:40.704801 | orchestrator | Sunday 01 June 2025 23:53:27 +0000 (0:00:01.018) 0:01:00.403 ***********
2025-06-01 23:55:40.704817 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:55:40.704832 | orchestrator |
2025-06-01 23:55:40.704847 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2025-06-01 23:55:40.704863 | orchestrator | Sunday 01 June 2025 23:53:34 +0000 (0:00:07.811) 0:01:08.215 ***********
2025-06-01 23:55:40.704902 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:55:40.704920 | orchestrator |
2025-06-01 23:55:40.704937 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2025-06-01 23:55:40.704954 | orchestrator | Sunday 01 June 2025 23:53:36 +0000 (0:00:01.619) 0:01:09.835 ***********
2025-06-01 23:55:40.704971 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:55:40.704988 | orchestrator |
2025-06-01 23:55:40.705005 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2025-06-01 23:55:40.705022 | orchestrator | Sunday 01 June 2025 23:53:39 +0000 (0:00:02.561) 0:01:12.396 ***********
2025-06-01 23:55:40.705038 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:55:40.705055 | orchestrator |
2025-06-01 23:55:40.705072 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2025-06-01 23:55:40.705089 | orchestrator | Sunday 01 June 2025 23:53:39 +0000 (0:00:00.119) 0:01:12.516 ***********
2025-06-01 23:55:40.705105 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:55:40.705122 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:55:40.705139 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:55:40.705156 | orchestrator |
2025-06-01 23:55:40.705173 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2025-06-01 23:55:40.705190 | orchestrator | Sunday 01 June 2025 23:53:39 +0000 (0:00:00.506) 0:01:13.023 ***********
2025-06-01 23:55:40.705207 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:55:40.705225 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2025-06-01 23:55:40.705242 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:55:40.705259 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:55:40.705276 | orchestrator |
2025-06-01 23:55:40.705291 | orchestrator | PLAY [Restart mariadb services] ************************************************
2025-06-01 23:55:40.705307 | orchestrator | skipping: no hosts matched
2025-06-01 23:55:40.705323 | orchestrator |
2025-06-01 23:55:40.705339 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-06-01 23:55:40.705355 | orchestrator |
2025-06-01 23:55:40.705371 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-06-01 23:55:40.705387 | orchestrator | Sunday 01 June 2025 23:53:40 +0000 (0:00:00.330) 0:01:13.353 ***********
2025-06-01 23:55:40.705403 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:55:40.705419 | orchestrator |
2025-06-01 23:55:40.705435 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-06-01 23:55:40.705451 | orchestrator | Sunday 01 June 2025 23:53:59 +0000 (0:00:19.293) 0:01:32.646 ***********
2025-06-01 23:55:40.705467 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:55:40.705483 | orchestrator |
2025-06-01 23:55:40.705511 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-06-01 23:55:40.705527 | orchestrator | Sunday 01 June 2025 23:54:20 +0000 (0:00:21.573) 0:01:54.220 ***********
2025-06-01 23:55:40.705543 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:55:40.705559 | orchestrator |
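Note: the repeated "Wait for MariaDB service to sync WSREP" tasks are what make this a safe rolling join: a node answers on 3306 (port liveness) well before it has finished its Galera state transfer, and it is only safe to move on to the next node once wsrep_local_state_comment reports Synced. That is why node-2's play below starts only after node-1 reported ok here. A sketch of such a poll (client invocation and credentials are assumptions):

  - name: Wait for MariaDB service to sync WSREP
    ansible.builtin.command: >
      mysql -h 192.168.16.11 -u monitor -p{{ mariadb_monitor_password }}
      -B -N -e "SHOW STATUS LIKE 'wsrep_local_state_comment'"
    register: wsrep_status
    until: "'Synced' in wsrep_status.stdout"
    retries: 30
    delay: 10
    changed_when: false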
2025-06-01 23:55:40.705574 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-06-01 23:55:40.705589 | orchestrator |
2025-06-01 23:55:40.705605 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-06-01 23:55:40.705634 | orchestrator | Sunday 01 June 2025 23:54:23 +0000 (0:00:02.553) 0:01:56.773 ***********
2025-06-01 23:55:40.705650 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:55:40.705667 | orchestrator |
2025-06-01 23:55:40.705683 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-06-01 23:55:40.705699 | orchestrator | Sunday 01 June 2025 23:54:43 +0000 (0:00:19.906) 0:02:16.680 ***********
2025-06-01 23:55:40.705716 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:55:40.705732 | orchestrator |
2025-06-01 23:55:40.705748 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-06-01 23:55:40.705764 | orchestrator | Sunday 01 June 2025 23:55:04 +0000 (0:00:20.607) 0:02:37.287 ***********
2025-06-01 23:55:40.705780 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:55:40.705796 | orchestrator |
2025-06-01 23:55:40.705812 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2025-06-01 23:55:40.705829 | orchestrator |
2025-06-01 23:55:40.705857 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-06-01 23:55:40.705945 | orchestrator | Sunday 01 June 2025 23:55:06 +0000 (0:00:02.740) 0:02:40.027 ***********
2025-06-01 23:55:40.705966 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:55:40.705983 | orchestrator |
2025-06-01 23:55:40.706000 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-06-01 23:55:40.706054 | orchestrator | Sunday 01 June 2025 23:55:18 +0000 (0:00:11.543) 0:02:51.570 ***********
2025-06-01 23:55:40.706075 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:55:40.706093 | orchestrator |
2025-06-01 23:55:40.706111 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-06-01 23:55:40.706128 | orchestrator | Sunday 01 June 2025 23:55:22 +0000 (0:00:04.573) 0:02:56.144 ***********
2025-06-01 23:55:40.706146 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:55:40.706164 | orchestrator |
2025-06-01 23:55:40.706182 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2025-06-01 23:55:40.706199 | orchestrator |
2025-06-01 23:55:40.706217 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2025-06-01 23:55:40.706235 | orchestrator | Sunday 01 June 2025 23:55:25 +0000 (0:00:02.496) 0:02:58.640 ***********
2025-06-01 23:55:40.706253 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 23:55:40.706270 | orchestrator |
2025-06-01 23:55:40.706288 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2025-06-01 23:55:40.706306 | orchestrator | Sunday 01 June 2025 23:55:25 +0000 (0:00:00.528) 0:02:59.169 ***********
2025-06-01 23:55:40.706324 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:55:40.706341 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:55:40.706359 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:55:40.706377 | orchestrator |
2025-06-01 23:55:40.706394 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2025-06-01 23:55:40.706412 | orchestrator | Sunday 01 June 2025 23:55:28 +0000 (0:00:02.340) 0:03:01.510 ***********
2025-06-01 23:55:40.706430 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:55:40.706448 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:55:40.706466 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:55:40.706480 | orchestrator |
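Note: the user-creation tasks run only against the first host of the shard (skipping on testbed-node-1/-2, changed on testbed-node-0); Galera then replicates the accounts to the other members, so running them everywhere would be redundant. The "Creating mysql monitor user" task corresponds to the MYSQL_USERNAME: monitor entry visible in the container environment earlier in this log. A sketch (privileges and login parameters are assumptions, not the role's literal values):

  - name: Creating mysql monitor user
    community.mysql.mysql_user:
      name: monitor
      password: "{{ mariadb_monitor_password }}"
      host: "%"
      priv: "*.*:USAGE,REPLICATION CLIENT"
      login_host: "192.168.16.10"
      login_user: root
      login_password: "{{ database_password }}"
    run_once: true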
2025-06-01 23:55:40.706182 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-06-01 23:55:40.706199 | orchestrator | 2025-06-01 23:55:40.706217 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-06-01 23:55:40.706235 | orchestrator | Sunday 01 June 2025 23:55:25 +0000 (0:00:02.496) 0:02:58.640 *********** 2025-06-01 23:55:40.706253 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:55:40.706270 | orchestrator | 2025-06-01 23:55:40.706288 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-06-01 23:55:40.706306 | orchestrator | Sunday 01 June 2025 23:55:25 +0000 (0:00:00.528) 0:02:59.169 *********** 2025-06-01 23:55:40.706324 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:55:40.706341 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:55:40.706359 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:55:40.706377 | orchestrator | 2025-06-01 23:55:40.706394 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-06-01 23:55:40.706412 | orchestrator | Sunday 01 June 2025 23:55:28 +0000 (0:00:02.340) 0:03:01.510 *********** 2025-06-01 23:55:40.706430 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:55:40.706448 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:55:40.706466 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:55:40.706480 | orchestrator | 2025-06-01 23:55:40.706494 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-06-01 23:55:40.706508 | orchestrator | Sunday 01 June 2025 23:55:30 +0000 (0:00:02.020) 0:03:03.531 *********** 2025-06-01 23:55:40.706523 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:55:40.706537 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:55:40.706552 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:55:40.706566 | orchestrator | 2025-06-01 23:55:40.706579 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-06-01 23:55:40.706593 | orchestrator | Sunday 01 June 2025 23:55:32 +0000 (0:00:01.990) 0:03:05.521 *********** 2025-06-01 23:55:40.706607 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:55:40.706621 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:55:40.706651 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:55:40.706666 | orchestrator | 2025-06-01 23:55:40.706680 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-06-01 23:55:40.706694 | orchestrator | Sunday 01 June 2025 23:55:34 +0000 (0:00:01.997) 0:03:07.518 *********** 2025-06-01 23:55:40.706709 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:55:40.706723 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:55:40.706738 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:55:40.706752 | orchestrator | 2025-06-01 23:55:40.706766 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-06-01 23:55:40.706781 | orchestrator | Sunday 01 June 2025 23:55:37 +0000 (0:00:03.017) 0:03:10.536 *********** 2025-06-01 23:55:40.706795 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:55:40.706810 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:55:40.706824 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:55:40.706838 | orchestrator | 2025-06-01 23:55:40.706853 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 23:55:40.706869 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-06-01 23:55:40.706910 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-06-01 23:55:40.706926 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-06-01 23:55:40.706949 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-06-01 23:55:40.706966 | orchestrator | 2025-06-01 23:55:40.706981 | orchestrator | 2025-06-01 23:55:40.706997 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 23:55:40.707012 | orchestrator | Sunday 01 June 2025 23:55:37 +0000 (0:00:00.242) 0:03:10.778 *********** 2025-06-01 23:55:40.707027 | orchestrator | =============================================================================== 2025-06-01 23:55:40.707041 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 42.18s 2025-06-01 23:55:40.707055 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 39.20s 2025-06-01 23:55:40.707071 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.54s 2025-06-01 23:55:40.707086 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.88s 2025-06-01 23:55:40.707099 | orchestrator | mariadb : Running MariaDB bootstrap container
-------------------------- 10.14s 2025-06-01 23:55:40.707111 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.81s 2025-06-01 23:55:40.707136 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.29s 2025-06-01 23:55:40.707149 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.57s 2025-06-01 23:55:40.707161 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.91s 2025-06-01 23:55:40.707173 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.66s 2025-06-01 23:55:40.707185 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.41s 2025-06-01 23:55:40.707197 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.40s 2025-06-01 23:55:40.707209 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.14s 2025-06-01 23:55:40.707221 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.13s 2025-06-01 23:55:40.707232 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.02s 2025-06-01 23:55:40.707245 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.86s 2025-06-01 23:55:40.707257 | orchestrator | Check MariaDB service --------------------------------------------------- 2.84s 2025-06-01 23:55:40.707281 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.56s 2025-06-01 23:55:40.707294 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.50s 2025-06-01 23:55:40.707305 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.34s
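While these playbooks run, the deploy wrapper polls the remaining OSISM task IDs until they leave the STARTED state; the condensed status lines below are the output of that loop. In outline it amounts to the following sketch (query_task_state is a hypothetical stand-in for the wrapper's task-state lookup, not a real command):

    # Report each pending task's state, then sleep and retry while any task
    # is still STARTED; mirrors the INFO lines in this log.
    pending="e4cbb99e-200a-41d9-903b-acafd41826ad 14223f4b-1105-4bc8-b61f-8af03e11e27b 04713950-2b9d-457e-861b-6297b8a6697c"
    while [ -n "$pending" ]; do
        still=""
        for task in $pending; do
            state=$(query_task_state "$task")
            echo "INFO  | Task $task is in state $state"
            [ "$state" = "STARTED" ] && still="$still $task"
        done
        pending=$still
        [ -n "$pending" ] && echo "INFO  | Wait 1 second(s) until the next check" && sleep 1
    done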
2025-06-01 23:55:40.707317 | orchestrator | 2025-06-01 23:55:40 | INFO  | Task 14223f4b-1105-4bc8-b61f-8af03e11e27b is in state STARTED 2025-06-01 23:55:40.707330 | orchestrator | 2025-06-01 23:55:40 | INFO  | Task 04713950-2b9d-457e-861b-6297b8a6697c is in state STARTED 2025-06-01 23:55:40.707342 | orchestrator | 2025-06-01 23:55:40 | INFO  | Wait 1 second(s) until the next check [identical checks repeat every ~3 seconds from 23:55:43 through 23:56:29; tasks e4cbb99e-200a-41d9-903b-acafd41826ad, 14223f4b-1105-4bc8-b61f-8af03e11e27b and 04713950-2b9d-457e-861b-6297b8a6697c all remain in state STARTED] 2025-06-01 23:56:32.523616 | orchestrator
| 2025-06-01 23:56:32 | INFO  | Task e4cbb99e-200a-41d9-903b-acafd41826ad is in state STARTED 2025-06-01 23:56:32.523727 | orchestrator | 2025-06-01 23:56:32 | INFO  | Task a684174e-de06-4f8f-8fcc-5366b9547e64 is in state STARTED 2025-06-01 23:56:32.524839 | orchestrator | 2025-06-01 23:56:32 | INFO  | Task 14223f4b-1105-4bc8-b61f-8af03e11e27b is in state STARTED 2025-06-01 23:56:32.527820 | orchestrator | 2025-06-01 23:56:32 | INFO  | Task 04713950-2b9d-457e-861b-6297b8a6697c is in state SUCCESS 2025-06-01 23:56:32.530793 | orchestrator | 2025-06-01 23:56:32.530901 | orchestrator | 2025-06-01 23:56:32.530919 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-06-01 23:56:32.530931 | orchestrator | 2025-06-01 23:56:32.530943 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-06-01 23:56:32.530954 | orchestrator | Sunday 01 June 2025 23:54:24 +0000 (0:00:00.636) 0:00:00.636 *********** 2025-06-01 23:56:32.530966 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 23:56:32.530978 | orchestrator | 2025-06-01 23:56:32.530990 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-06-01 23:56:32.531001 | orchestrator | Sunday 01 June 2025 23:54:25 +0000 (0:00:00.624) 0:00:01.260 *********** 2025-06-01 23:56:32.531012 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:56:32.531025 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:56:32.531036 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:56:32.531046 | orchestrator | 2025-06-01 23:56:32.531057 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-06-01 23:56:32.531068 | orchestrator | Sunday 01 June 2025 23:54:25 +0000 (0:00:00.787) 0:00:02.048 *********** 2025-06-01 23:56:32.531079 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:56:32.531090 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:56:32.531101 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:56:32.531133 | orchestrator | 2025-06-01 23:56:32.531158 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-06-01 23:56:32.531170 | orchestrator | Sunday 01 June 2025 23:54:26 +0000 (0:00:00.291) 0:00:02.340 *********** 2025-06-01 23:56:32.531180 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:56:32.531226 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:56:32.531245 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:56:32.531462 | orchestrator | 2025-06-01 23:56:32.531474 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-06-01 23:56:32.531485 | orchestrator | Sunday 01 June 2025 23:54:26 +0000 (0:00:00.761) 0:00:03.102 *********** 2025-06-01 23:56:32.531496 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:56:32.531507 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:56:32.531517 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:56:32.531528 | orchestrator | 2025-06-01 23:56:32.531539 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-06-01 23:56:32.531550 | orchestrator | Sunday 01 June 2025 23:54:27 +0000 (0:00:00.327) 0:00:03.429 *********** 2025-06-01 23:56:32.531560 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:56:32.531571 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:56:32.531582 | orchestrator | ok: [testbed-node-5] 
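The ceph-facts tasks above settle how Ceph commands get invoked for the rest of the play: roughly, ceph-ansible prefers podman when its binary is present, falls back to docker otherwise, and for containerized deployments wraps the ceph CLI in that runtime. A simplified shell equivalent of those two facts (the image reference and mount list are illustrative, not taken from this job):

    # container_binary: podman if installed, else docker.
    if command -v podman >/dev/null 2>&1; then
        container_binary=podman
    else
        container_binary=docker
    fi
    # ceph_cmd: run the ceph CLI inside the Ceph container image, e.g. for
    # the "Get current fsid" lookup that follows in this play:
    ceph_cmd="$container_binary run --rm --net=host -v /etc/ceph:/etc/ceph:z --entrypoint=ceph IMAGE"
    # $ceph_cmd --cluster ceph fsid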
2025-06-01 23:56:32.532019 | orchestrator | 2025-06-01 23:56:32.532036 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-06-01 23:56:32.532047 | orchestrator | Sunday 01 June 2025 23:54:27 +0000 (0:00:00.308) 0:00:03.737 *********** 2025-06-01 23:56:32.532059 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:56:32.532070 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:56:32.532081 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:56:32.532091 | orchestrator | 2025-06-01 23:56:32.532103 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-06-01 23:56:32.532113 | orchestrator | Sunday 01 June 2025 23:54:27 +0000 (0:00:00.311) 0:00:04.048 *********** 2025-06-01 23:56:32.532124 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:56:32.532136 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:56:32.532146 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:56:32.532172 | orchestrator | 2025-06-01 23:56:32.532184 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-06-01 23:56:32.532195 | orchestrator | Sunday 01 June 2025 23:54:28 +0000 (0:00:00.610) 0:00:04.659 *********** 2025-06-01 23:56:32.532207 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:56:32.532218 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:56:32.532229 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:56:32.532240 | orchestrator | 2025-06-01 23:56:32.532253 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-06-01 23:56:32.532272 | orchestrator | Sunday 01 June 2025 23:54:28 +0000 (0:00:00.311) 0:00:04.970 *********** 2025-06-01 23:56:32.532290 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-01 23:56:32.532308 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-01 23:56:32.532326 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-01 23:56:32.532345 | orchestrator | 2025-06-01 23:56:32.532364 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-06-01 23:56:32.532382 | orchestrator | Sunday 01 June 2025 23:54:29 +0000 (0:00:00.607) 0:00:05.578 *********** 2025-06-01 23:56:32.532400 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:56:32.532412 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:56:32.532423 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:56:32.532433 | orchestrator | 2025-06-01 23:56:32.532444 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-06-01 23:56:32.532455 | orchestrator | Sunday 01 June 2025 23:54:29 +0000 (0:00:00.411) 0:00:05.990 *********** 2025-06-01 23:56:32.532470 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-01 23:56:32.532488 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-01 23:56:32.532504 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-01 23:56:32.532523 | orchestrator | 2025-06-01 23:56:32.532542 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-06-01 23:56:32.532577 | orchestrator | Sunday 01 June 2025 23:54:31 +0000 (0:00:02.107) 0:00:08.097 *********** 2025-06-01 
23:56:32.532589 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-01 23:56:32.532600 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-01 23:56:32.532610 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-01 23:56:32.532621 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:56:32.532633 | orchestrator | 2025-06-01 23:56:32.532646 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-06-01 23:56:32.532673 | orchestrator | Sunday 01 June 2025 23:54:32 +0000 (0:00:00.402) 0:00:08.500 *********** 2025-06-01 23:56:32.532687 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.532703 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.532717 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.532730 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:56:32.532743 | orchestrator | 2025-06-01 23:56:32.532756 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-06-01 23:56:32.532768 | orchestrator | Sunday 01 June 2025 23:54:33 +0000 (0:00:00.763) 0:00:09.263 *********** 2025-06-01 23:56:32.532783 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.532799 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.532819 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.532832 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:56:32.532845 | orchestrator | 2025-06-01 23:56:32.532889 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-06-01 23:56:32.532902 | orchestrator | Sunday 01 June 2025 23:54:33 +0000 (0:00:00.147) 0:00:09.411 *********** 
2025-06-01 23:56:32.532917 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'b6c7c102c210', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-06-01 23:54:30.505550', 'end': '2025-06-01 23:54:30.551190', 'delta': '0:00:00.045640', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b6c7c102c210'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-06-01 23:56:32.532941 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '622b85055fc1', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-06-01 23:54:31.247330', 'end': '2025-06-01 23:54:31.288985', 'delta': '0:00:00.041655', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['622b85055fc1'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-06-01 23:56:32.532966 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'e2be32b22fac', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-06-01 23:54:31.760286', 'end': '2025-06-01 23:54:31.794992', 'delta': '0:00:00.034706', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e2be32b22fac'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-06-01 23:56:32.532978 | orchestrator | 2025-06-01 23:56:32.532989 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-06-01 23:56:32.533000 | orchestrator | Sunday 01 June 2025 23:54:33 +0000 (0:00:00.400) 0:00:09.812 *********** 2025-06-01 23:56:32.533010 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:56:32.533021 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:56:32.533032 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:56:32.533058 | orchestrator | 2025-06-01 23:56:32.533070 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-06-01 23:56:32.533091 | orchestrator | Sunday 01 June 2025 23:54:34 +0000 (0:00:00.436) 0:00:10.249 *********** 2025-06-01 23:56:32.533102 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-06-01 23:56:32.533113 | orchestrator | 2025-06-01 23:56:32.533124 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-06-01 23:56:32.533134 | orchestrator | Sunday 01 June 2025 23:54:35 +0000 (0:00:01.593) 0:00:11.843 *********** 2025-06-01 23:56:32.533145 | 
orchestrator | skipping: [testbed-node-3] 2025-06-01 23:56:32.533156 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:56:32.533167 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:56:32.533178 | orchestrator | 2025-06-01 23:56:32.533189 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-06-01 23:56:32.533199 | orchestrator | Sunday 01 June 2025 23:54:35 +0000 (0:00:00.301) 0:00:12.144 *********** 2025-06-01 23:56:32.533210 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:56:32.533221 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:56:32.533232 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:56:32.533243 | orchestrator | 2025-06-01 23:56:32.533253 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-01 23:56:32.533264 | orchestrator | Sunday 01 June 2025 23:54:36 +0000 (0:00:00.388) 0:00:12.533 *********** 2025-06-01 23:56:32.533275 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:56:32.533286 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:56:32.533297 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:56:32.533307 | orchestrator | 2025-06-01 23:56:32.533324 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-06-01 23:56:32.533343 | orchestrator | Sunday 01 June 2025 23:54:36 +0000 (0:00:00.456) 0:00:12.989 *********** 2025-06-01 23:56:32.533373 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:56:32.533395 | orchestrator | 2025-06-01 23:56:32.533424 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-06-01 23:56:32.533437 | orchestrator | Sunday 01 June 2025 23:54:36 +0000 (0:00:00.129) 0:00:13.119 *********** 2025-06-01 23:56:32.533448 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:56:32.533458 | orchestrator | 2025-06-01 23:56:32.533469 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-01 23:56:32.533480 | orchestrator | Sunday 01 June 2025 23:54:37 +0000 (0:00:00.220) 0:00:13.339 *********** 2025-06-01 23:56:32.533490 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:56:32.533501 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:56:32.533511 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:56:32.533522 | orchestrator | 2025-06-01 23:56:32.533532 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-06-01 23:56:32.533543 | orchestrator | Sunday 01 June 2025 23:54:37 +0000 (0:00:00.292) 0:00:13.631 *********** 2025-06-01 23:56:32.533554 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:56:32.533564 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:56:32.533575 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:56:32.533585 | orchestrator | 2025-06-01 23:56:32.533596 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-06-01 23:56:32.533607 | orchestrator | Sunday 01 June 2025 23:54:37 +0000 (0:00:00.308) 0:00:13.940 *********** 2025-06-01 23:56:32.533617 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:56:32.533628 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:56:32.533638 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:56:32.533649 | orchestrator | 2025-06-01 23:56:32.533660 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] 
*************************** 2025-06-01 23:56:32.533671 | orchestrator | Sunday 01 June 2025 23:54:38 +0000 (0:00:00.487) 0:00:14.428 *********** 2025-06-01 23:56:32.533681 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:56:32.533692 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:56:32.533703 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:56:32.533713 | orchestrator | 2025-06-01 23:56:32.533724 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-06-01 23:56:32.533735 | orchestrator | Sunday 01 June 2025 23:54:38 +0000 (0:00:00.302) 0:00:14.731 *********** 2025-06-01 23:56:32.533745 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:56:32.533756 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:56:32.533766 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:56:32.533777 | orchestrator | 2025-06-01 23:56:32.533788 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-06-01 23:56:32.533798 | orchestrator | Sunday 01 June 2025 23:54:38 +0000 (0:00:00.340) 0:00:15.072 *********** 2025-06-01 23:56:32.533809 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:56:32.533820 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:56:32.533830 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:56:32.533841 | orchestrator | 2025-06-01 23:56:32.533909 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-06-01 23:56:32.533933 | orchestrator | Sunday 01 June 2025 23:54:39 +0000 (0:00:00.313) 0:00:15.386 *********** 2025-06-01 23:56:32.533944 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:56:32.533955 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:56:32.533966 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:56:32.533977 | orchestrator | 2025-06-01 23:56:32.533987 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-06-01 23:56:32.533998 | orchestrator | Sunday 01 June 2025 23:54:39 +0000 (0:00:00.518) 0:00:15.904 *********** 2025-06-01 23:56:32.534011 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--008ba5ef--cc9a--56f9--b375--6638a5870e2c-osd--block--008ba5ef--cc9a--56f9--b375--6638a5870e2c', 'dm-uuid-LVM-lY000ij8spVdbwMPsuHwxRm6N8rXo1xEKM0the2kvnHN2HXreC8YTiSxCd2xa1F9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-01 23:56:32.534088 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--21b07b94--4d11--536c--9a45--349f1f6df87d-osd--block--21b07b94--4d11--536c--9a45--349f1f6df87d', 'dm-uuid-LVM-GhKV0mAmfVz3OWnt6h44eSN08J1eg2uHr8WuQrYIYGQqaeCQUZTMj4etyxA5NS1i'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-01 23:56:32.534100 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  [seven near-identical skip entries for loop1 through loop7 on testbed-node-3 omitted; all are zero-size virtual loop devices] 2025-06-01
23:56:32.534222 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9eb8197d-3dc8-4459-9c52-34779715aaef', 'scsi-SQEMU_QEMU_HARDDISK_9eb8197d-3dc8-4459-9c52-34779715aaef'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9eb8197d-3dc8-4459-9c52-34779715aaef-part1', 'scsi-SQEMU_QEMU_HARDDISK_9eb8197d-3dc8-4459-9c52-34779715aaef-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9eb8197d-3dc8-4459-9c52-34779715aaef-part14', 'scsi-SQEMU_QEMU_HARDDISK_9eb8197d-3dc8-4459-9c52-34779715aaef-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9eb8197d-3dc8-4459-9c52-34779715aaef-part15', 'scsi-SQEMU_QEMU_HARDDISK_9eb8197d-3dc8-4459-9c52-34779715aaef-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9eb8197d-3dc8-4459-9c52-34779715aaef-part16', 'scsi-SQEMU_QEMU_HARDDISK_9eb8197d-3dc8-4459-9c52-34779715aaef-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 23:56:32.534237 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--008ba5ef--cc9a--56f9--b375--6638a5870e2c-osd--block--008ba5ef--cc9a--56f9--b375--6638a5870e2c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-p9a3OW-9Nqb-KczT-JCE0-BEka-77tW-mDkqM7', 'scsi-0QEMU_QEMU_HARDDISK_e23ad96a-b832-416d-911f-1711f12500c4', 'scsi-SQEMU_QEMU_HARDDISK_e23ad96a-b832-416d-911f-1711f12500c4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 23:56:32.534257 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--21b07b94--4d11--536c--9a45--349f1f6df87d-osd--block--21b07b94--4d11--536c--9a45--349f1f6df87d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7d1Hl7-0zv3-YwmZ-7i1K-4hHB-xpVU-3DfYgL', 'scsi-0QEMU_QEMU_HARDDISK_768ce349-132d-4c04-96b3-035bfe10ebf6', 'scsi-SQEMU_QEMU_HARDDISK_768ce349-132d-4c04-96b3-035bfe10ebf6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 23:56:32.534277 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f2cefa5c-3d1d-4277-b121-6d9adea683a7', 'scsi-SQEMU_QEMU_HARDDISK_f2cefa5c-3d1d-4277-b121-6d9adea683a7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 23:56:32.534289 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e43a5796--5555--5d7b--8188--8712d414b3d1-osd--block--e43a5796--5555--5d7b--8188--8712d414b3d1', 'dm-uuid-LVM-8rltPS1zinphry04VtbqOAXIZky2BSieqrdJexquUh4cweVg01NJJXJtYXbGecAM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-01 23:56:32.534305 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-22-06-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 23:56:32.534317 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3aa9cf12--e8a4--5f15--a0dc--00261f7d28af-osd--block--3aa9cf12--e8a4--5f15--a0dc--00261f7d28af', 'dm-uuid-LVM-C3VzGVCzSeDNjw2tbyMc3DGsFYajVmUTz1RoWot2AV2e2E7uQ1eGrtoZwznrehJx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-01 23:56:32.534328 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2025-06-01 23:56:32.534339 | orchestrator | [seven near-identical skip entries for loop1 through loop7 on testbed-node-4 omitted; all are zero-size virtual loop devices] 2025-06-01 23:56:32.534479 | orchestrator | skipping: [testbed-node-3] 2025-06-01 23:56:32.534512 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b16f75d9-1f35-401a-92f6-79076ad325ac', 'scsi-SQEMU_QEMU_HARDDISK_b16f75d9-1f35-401a-92f6-79076ad325ac'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b16f75d9-1f35-401a-92f6-79076ad325ac-part1', 'scsi-SQEMU_QEMU_HARDDISK_b16f75d9-1f35-401a-92f6-79076ad325ac-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b16f75d9-1f35-401a-92f6-79076ad325ac-part14', 'scsi-SQEMU_QEMU_HARDDISK_b16f75d9-1f35-401a-92f6-79076ad325ac-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b16f75d9-1f35-401a-92f6-79076ad325ac-part15', 'scsi-SQEMU_QEMU_HARDDISK_b16f75d9-1f35-401a-92f6-79076ad325ac-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b16f75d9-1f35-401a-92f6-79076ad325ac-part16', 'scsi-SQEMU_QEMU_HARDDISK_b16f75d9-1f35-401a-92f6-79076ad325ac-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 23:56:32.534532 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e43a5796--5555--5d7b--8188--8712d414b3d1-osd--block--e43a5796--5555--5d7b--8188--8712d414b3d1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bcRKUj-EFOQ-SLMk-JCXp-GRU2-2XWe-xGmPQ9', 'scsi-0QEMU_QEMU_HARDDISK_9f9b614f-8ac1-443f-a8a9-e3e743fec9fb', 'scsi-SQEMU_QEMU_HARDDISK_9f9b614f-8ac1-443f-a8a9-e3e743fec9fb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 23:56:32.534543 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--3aa9cf12--e8a4--5f15--a0dc--00261f7d28af-osd--block--3aa9cf12--e8a4--5f15--a0dc--00261f7d28af'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-z4dNgx-iCbZ-zdgb-84C2-wWgr-4Inq-N7foRu', 'scsi-0QEMU_QEMU_HARDDISK_39b25e00-2509-407e-b71e-c183a8ac9680', 'scsi-SQEMU_QEMU_HARDDISK_39b25e00-2509-407e-b71e-c183a8ac9680'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 23:56:32.534557 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_389c9d93-9871-4a47-9a60-ac279d750f3d', 'scsi-SQEMU_QEMU_HARDDISK_389c9d93-9871-4a47-9a60-ac279d750f3d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 23:56:32.534568 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-22-06-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 23:56:32.534578 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:56:32.534588 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--94e6c78b--35f7--5cb8--865b--5befb7b6694e-osd--block--94e6c78b--35f7--5cb8--865b--5befb7b6694e', 'dm-uuid-LVM-TVIPiUfVpbD6GvoWrA4o5pJFszWB8LnPPqNC08Jc6tZWE4KqZb40k4MizNg5MmR2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-01 23:56:32.534603 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0de39833--f6ff--5bf1--9ca3--735e32822edb-osd--block--0de39833--f6ff--5bf1--9ca3--735e32822edb', 'dm-uuid-LVM-rghCdHrgDliYpWxe2d0NNWSEjMhDpqbmj0brKiNeuCxFrEVtrHkALbS5a1RhrcWe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-01 23:56:32.534623 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  [seven near-identical skip entries for loop1 through loop7 on testbed-node-5 omitted; all are zero-size virtual loop devices] 2025-06-01 23:56:32.534791 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee682ed7-cf61-4b4b-b7dc-0c09473318ce', 'scsi-SQEMU_QEMU_HARDDISK_ee682ed7-cf61-4b4b-b7dc-0c09473318ce'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee682ed7-cf61-4b4b-b7dc-0c09473318ce-part1', 'scsi-SQEMU_QEMU_HARDDISK_ee682ed7-cf61-4b4b-b7dc-0c09473318ce-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee682ed7-cf61-4b4b-b7dc-0c09473318ce-part14', 'scsi-SQEMU_QEMU_HARDDISK_ee682ed7-cf61-4b4b-b7dc-0c09473318ce-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee682ed7-cf61-4b4b-b7dc-0c09473318ce-part15', 'scsi-SQEMU_QEMU_HARDDISK_ee682ed7-cf61-4b4b-b7dc-0c09473318ce-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee682ed7-cf61-4b4b-b7dc-0c09473318ce-part16', 'scsi-SQEMU_QEMU_HARDDISK_ee682ed7-cf61-4b4b-b7dc-0c09473318ce-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 23:56:32.534819 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--94e6c78b--35f7--5cb8--865b--5befb7b6694e-osd--block--94e6c78b--35f7--5cb8--865b--5befb7b6694e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uqqcBj-NimS-QMCO-aRaJ-qXOj-GzzO-V2iKe2', 'scsi-0QEMU_QEMU_HARDDISK_b890f567-0ad2-40b6-bedf-e62e59fc0322', 'scsi-SQEMU_QEMU_HARDDISK_b890f567-0ad2-40b6-bedf-e62e59fc0322'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 23:56:32.534830 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--0de39833--f6ff--5bf1--9ca3--735e32822edb-osd--block--0de39833--f6ff--5bf1--9ca3--735e32822edb'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-mbm1bs-L8pD-XLHD-828b-UG3H-3g49-J97dJp', 'scsi-0QEMU_QEMU_HARDDISK_9eb75d32-600b-4da1-bdd4-064d087d06d5', 'scsi-SQEMU_QEMU_HARDDISK_9eb75d32-600b-4da1-bdd4-064d087d06d5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 23:56:32.534840 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a8e8789d-2f8d-4752-a1c5-15f6e96bd27f', 'scsi-SQEMU_QEMU_HARDDISK_a8e8789d-2f8d-4752-a1c5-15f6e96bd27f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 23:56:32.534894 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-22-06-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-01 23:56:32.534906 | orchestrator | skipping: [testbed-node-5] 2025-06-01 23:56:32.534916 | orchestrator | 2025-06-01 23:56:32.534926 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-06-01 23:56:32.534936 | orchestrator | Sunday 01 June 2025 23:54:40 +0000 (0:00:00.593) 0:00:16.498 *********** 2025-06-01 23:56:32.534947 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--008ba5ef--cc9a--56f9--b375--6638a5870e2c-osd--block--008ba5ef--cc9a--56f9--b375--6638a5870e2c', 'dm-uuid-LVM-lY000ij8spVdbwMPsuHwxRm6N8rXo1xEKM0the2kvnHN2HXreC8YTiSxCd2xa1F9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.534958 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--21b07b94--4d11--536c--9a45--349f1f6df87d-osd--block--21b07b94--4d11--536c--9a45--349f1f6df87d', 'dm-uuid-LVM-GhKV0mAmfVz3OWnt6h44eSN08J1eg2uHr8WuQrYIYGQqaeCQUZTMj4etyxA5NS1i'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.534972 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.534983 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.534999 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.535015 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.535026 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.535036 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.535046 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.535060 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e43a5796--5555--5d7b--8188--8712d414b3d1-osd--block--e43a5796--5555--5d7b--8188--8712d414b3d1', 'dm-uuid-LVM-8rltPS1zinphry04VtbqOAXIZky2BSieqrdJexquUh4cweVg01NJJXJtYXbGecAM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.535078 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.535094 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3aa9cf12--e8a4--5f15--a0dc--00261f7d28af-osd--block--3aa9cf12--e8a4--5f15--a0dc--00261f7d28af', 'dm-uuid-LVM-C3VzGVCzSeDNjw2tbyMc3DGsFYajVmUTz1RoWot2AV2e2E7uQ1eGrtoZwznrehJx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.535105 | 
orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.535121 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9eb8197d-3dc8-4459-9c52-34779715aaef', 'scsi-SQEMU_QEMU_HARDDISK_9eb8197d-3dc8-4459-9c52-34779715aaef'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9eb8197d-3dc8-4459-9c52-34779715aaef-part1', 'scsi-SQEMU_QEMU_HARDDISK_9eb8197d-3dc8-4459-9c52-34779715aaef-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9eb8197d-3dc8-4459-9c52-34779715aaef-part14', 'scsi-SQEMU_QEMU_HARDDISK_9eb8197d-3dc8-4459-9c52-34779715aaef-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9eb8197d-3dc8-4459-9c52-34779715aaef-part15', 'scsi-SQEMU_QEMU_HARDDISK_9eb8197d-3dc8-4459-9c52-34779715aaef-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9eb8197d-3dc8-4459-9c52-34779715aaef-part16', 'scsi-SQEMU_QEMU_HARDDISK_9eb8197d-3dc8-4459-9c52-34779715aaef-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.535139 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--008ba5ef--cc9a--56f9--b375--6638a5870e2c-osd--block--008ba5ef--cc9a--56f9--b375--6638a5870e2c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-p9a3OW-9Nqb-KczT-JCE0-BEka-77tW-mDkqM7', 'scsi-0QEMU_QEMU_HARDDISK_e23ad96a-b832-416d-911f-1711f12500c4', 'scsi-SQEMU_QEMU_HARDDISK_e23ad96a-b832-416d-911f-1711f12500c4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.535156 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.535166 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.535177 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--21b07b94--4d11--536c--9a45--349f1f6df87d-osd--block--21b07b94--4d11--536c--9a45--349f1f6df87d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-7d1Hl7-0zv3-YwmZ-7i1K-4hHB-xpVU-3DfYgL', 'scsi-0QEMU_QEMU_HARDDISK_768ce349-132d-4c04-96b3-035bfe10ebf6', 'scsi-SQEMU_QEMU_HARDDISK_768ce349-132d-4c04-96b3-035bfe10ebf6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.535191 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.535207 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f2cefa5c-3d1d-4277-b121-6d9adea683a7', 'scsi-SQEMU_QEMU_HARDDISK_f2cefa5c-3d1d-4277-b121-6d9adea683a7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.535223 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-22-06-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.535233 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.535243 | orchestrator | skipping: [testbed-node-3] 
2025-06-01 23:56:32.535253 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.535263 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.535277 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.535301 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b16f75d9-1f35-401a-92f6-79076ad325ac', 'scsi-SQEMU_QEMU_HARDDISK_b16f75d9-1f35-401a-92f6-79076ad325ac'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b16f75d9-1f35-401a-92f6-79076ad325ac-part1', 'scsi-SQEMU_QEMU_HARDDISK_b16f75d9-1f35-401a-92f6-79076ad325ac-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b16f75d9-1f35-401a-92f6-79076ad325ac-part14', 'scsi-SQEMU_QEMU_HARDDISK_b16f75d9-1f35-401a-92f6-79076ad325ac-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b16f75d9-1f35-401a-92f6-79076ad325ac-part15', 'scsi-SQEMU_QEMU_HARDDISK_b16f75d9-1f35-401a-92f6-79076ad325ac-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b16f75d9-1f35-401a-92f6-79076ad325ac-part16', 'scsi-SQEMU_QEMU_HARDDISK_b16f75d9-1f35-401a-92f6-79076ad325ac-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.535313 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--e43a5796--5555--5d7b--8188--8712d414b3d1-osd--block--e43a5796--5555--5d7b--8188--8712d414b3d1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bcRKUj-EFOQ-SLMk-JCXp-GRU2-2XWe-xGmPQ9', 'scsi-0QEMU_QEMU_HARDDISK_9f9b614f-8ac1-443f-a8a9-e3e743fec9fb', 'scsi-SQEMU_QEMU_HARDDISK_9f9b614f-8ac1-443f-a8a9-e3e743fec9fb'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.535328 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--3aa9cf12--e8a4--5f15--a0dc--00261f7d28af-osd--block--3aa9cf12--e8a4--5f15--a0dc--00261f7d28af'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-z4dNgx-iCbZ-zdgb-84C2-wWgr-4Inq-N7foRu', 'scsi-0QEMU_QEMU_HARDDISK_39b25e00-2509-407e-b71e-c183a8ac9680', 'scsi-SQEMU_QEMU_HARDDISK_39b25e00-2509-407e-b71e-c183a8ac9680'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.535347 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_389c9d93-9871-4a47-9a60-ac279d750f3d', 'scsi-SQEMU_QEMU_HARDDISK_389c9d93-9871-4a47-9a60-ac279d750f3d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.535673 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--94e6c78b--35f7--5cb8--865b--5befb7b6694e-osd--block--94e6c78b--35f7--5cb8--865b--5befb7b6694e', 'dm-uuid-LVM-TVIPiUfVpbD6GvoWrA4o5pJFszWB8LnPPqNC08Jc6tZWE4KqZb40k4MizNg5MmR2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.535697 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-22-06-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.535707 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0de39833--f6ff--5bf1--9ca3--735e32822edb-osd--block--0de39833--f6ff--5bf1--9ca3--735e32822edb', 'dm-uuid-LVM-rghCdHrgDliYpWxe2d0NNWSEjMhDpqbmj0brKiNeuCxFrEVtrHkALbS5a1RhrcWe'], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.535717 | orchestrator | skipping: [testbed-node-4] 2025-06-01 23:56:32.535734 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.535754 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.535764 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.535782 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.535792 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.535802 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.535812 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.535832 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.535850 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee682ed7-cf61-4b4b-b7dc-0c09473318ce', 'scsi-SQEMU_QEMU_HARDDISK_ee682ed7-cf61-4b4b-b7dc-0c09473318ce'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee682ed7-cf61-4b4b-b7dc-0c09473318ce-part1', 'scsi-SQEMU_QEMU_HARDDISK_ee682ed7-cf61-4b4b-b7dc-0c09473318ce-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee682ed7-cf61-4b4b-b7dc-0c09473318ce-part14', 'scsi-SQEMU_QEMU_HARDDISK_ee682ed7-cf61-4b4b-b7dc-0c09473318ce-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee682ed7-cf61-4b4b-b7dc-0c09473318ce-part15', 'scsi-SQEMU_QEMU_HARDDISK_ee682ed7-cf61-4b4b-b7dc-0c09473318ce-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee682ed7-cf61-4b4b-b7dc-0c09473318ce-part16', 'scsi-SQEMU_QEMU_HARDDISK_ee682ed7-cf61-4b4b-b7dc-0c09473318ce-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.535970 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--94e6c78b--35f7--5cb8--865b--5befb7b6694e-osd--block--94e6c78b--35f7--5cb8--865b--5befb7b6694e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-uqqcBj-NimS-QMCO-aRaJ-qXOj-GzzO-V2iKe2', 'scsi-0QEMU_QEMU_HARDDISK_b890f567-0ad2-40b6-bedf-e62e59fc0322', 'scsi-SQEMU_QEMU_HARDDISK_b890f567-0ad2-40b6-bedf-e62e59fc0322'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-01 23:56:32.535987 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--0de39833--f6ff--5bf1--9ca3--735e32822edb-osd--block--0de39833--f6ff--5bf1--9ca3--735e32822edb'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-mbm1bs-L8pD-XLHD-828b-UG3H-3g49-J97dJp', 'scsi-0QEMU_QEMU_HARDDISK_9eb75d32-600b-4da1-bdd4-064d087d06d5', 'scsi-SQEMU_QEMU_HARDDISK_9eb75d32-600b-4da1-bdd4-064d087d06d5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-01 23:56:32.536005 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a8e8789d-2f8d-4752-a1c5-15f6e96bd27f', 'scsi-SQEMU_QEMU_HARDDISK_a8e8789d-2f8d-4752-a1c5-15f6e96bd27f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-01 23:56:32.536023 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-01-22-06-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-01 23:56:32.536034 | orchestrator | skipping: [testbed-node-5]
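The long "skipping" dump above comes from the ceph-facts task "Set_fact devices generate device list when osd_auto_discovery": it loops over every entry of ansible_facts['devices'] on each OSD node (the loop devices, the partitioned root disk sda, the LVM-held OSD disks sdb/sdc, the spare sdd, and the config-drive sr0), and every item is skipped because the recorded false_condition 'osd_auto_discovery | default(False) | bool' does not hold; this testbed passes an explicit devices list instead. A minimal sketch of the pattern, with illustrative filter conditions rather than ceph-ansible's exact task:

- name: Set_fact devices generate device list when osd_auto_discovery
  ansible.builtin.set_fact:
    devices: "{{ devices | default([]) + ['/dev/' + item.key] }}"
  with_dict: "{{ ansible_facts['devices'] }}"
  when:
    - osd_auto_discovery | default(False) | bool   # False on this testbed, so every item above is skipped
    - item.value.removable == '0'                  # would exclude the sr0 config drive
    - item.value.partitions | length == 0          # would exclude the partitioned root disk sda
    - item.value.holders | length == 0             # would exclude sdb/sdc, already claimed by Ceph LVs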
2025-06-01 23:56:32.536044 | orchestrator |
2025-06-01 23:56:32.536053 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-06-01 23:56:32.536063 | orchestrator | Sunday 01 June 2025 23:54:40 +0000 (0:00:00.607) 0:00:17.105 ***********
2025-06-01 23:56:32.536073 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:56:32.536084 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:56:32.536093 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:56:32.536102 | orchestrator |
2025-06-01 23:56:32.536112 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-06-01 23:56:32.536122 | orchestrator | Sunday 01 June 2025 23:54:41 +0000 (0:00:00.638) 0:00:17.744 ***********
2025-06-01 23:56:32.536131 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:56:32.536140 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:56:32.536150 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:56:32.536159 | orchestrator |
2025-06-01 23:56:32.536169 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-06-01 23:56:32.536179 | orchestrator | Sunday 01 June 2025 23:54:42 +0000 (0:00:00.472) 0:00:18.216 ***********
2025-06-01 23:56:32.536188 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:56:32.536197 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:56:32.536207 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:56:32.536216 | orchestrator |
2025-06-01 23:56:32.536226 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-06-01 23:56:32.536235 | orchestrator | Sunday 01 June 2025 23:54:42 +0000 (0:00:00.613) 0:00:18.830 ***********
2025-06-01 23:56:32.536251 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:56:32.536261 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:56:32.536271 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:56:32.536283 | orchestrator |
2025-06-01 23:56:32.536293 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-06-01 23:56:32.536304 | orchestrator | Sunday 01 June 2025 23:54:42 +0000 (0:00:00.297) 0:00:19.127 ***********
2025-06-01 23:56:32.536315 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:56:32.536326 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:56:32.536337 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:56:32.536347 | orchestrator |
2025-06-01 23:56:32.536359 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-06-01 23:56:32.536370 | orchestrator | Sunday 01 June 2025 23:54:43 +0000 (0:00:00.416) 0:00:19.544 ***********
2025-06-01 23:56:32.536381 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:56:32.536392 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:56:32.536402 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:56:32.536413 | orchestrator |
2025-06-01 23:56:32.536422 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-06-01 23:56:32.536431 | orchestrator | Sunday 01 June 2025 23:54:43 +0000 (0:00:00.527) 0:00:20.071 ***********
2025-06-01 23:56:32.536444 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-06-01 23:56:32.536463 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-06-01 23:56:32.536477 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-06-01 23:56:32.536490 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-06-01 23:56:32.536504 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-06-01 23:56:32.536518 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-06-01 23:56:32.536532 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-06-01 23:56:32.536546 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-06-01 23:56:32.536560 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-06-01 23:56:32.536569 | orchestrator |
2025-06-01 23:56:32.536578 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-06-01 23:56:32.536587 | orchestrator | Sunday 01 June 2025 23:54:44 +0000 (0:00:00.835) 0:00:20.907 ***********
2025-06-01 23:56:32.536596 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-06-01 23:56:32.536604 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-06-01 23:56:32.536613 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-06-01 23:56:32.536622 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:56:32.536631 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-06-01 23:56:32.536640 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-06-01 23:56:32.536649 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-06-01 23:56:32.536658 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:56:32.536667 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-06-01 23:56:32.536674 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-06-01 23:56:32.536682 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-06-01 23:56:32.536690 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:56:32.536697 | orchestrator |
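The two "_monitor_addresses" tasks build a per-host list of monitor names and addresses: the ipv4 variant runs once per monitor (testbed-node-0/1/2) for each OSD node, while the ipv6 twin is skipped throughout because this deployment is IPv4-only. Roughly, the accumulation looks like the sketch below; the variable names are inferred from the task titles, not checked against the ceph-facts source:

- name: Set_fact _monitor_addresses - ipv4
  ansible.builtin.set_fact:
    _monitor_addresses: "{{ _monitor_addresses | default([]) + [{'name': item, 'addr': hostvars[item]['ansible_facts']['default_ipv4']['address']}] }}"
  with_items: "{{ groups[mon_group_name] }}"
  when: ip_version == 'ipv4'   # the ipv6 variant carries the complementary condition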
2025-06-01 23:56:32.536705 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-06-01 23:56:32.536713 | orchestrator | Sunday 01 June 2025 23:54:45 +0000 (0:00:00.355) 0:00:21.262 ***********
2025-06-01 23:56:32.536721 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-01 23:56:32.536729 | orchestrator |
2025-06-01 23:56:32.536737 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-06-01 23:56:32.536751 | orchestrator | Sunday 01 June 2025 23:54:45 +0000 (0:00:00.706) 0:00:21.968 ***********
2025-06-01 23:56:32.536759 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:56:32.536767 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:56:32.536775 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:56:32.536783 | orchestrator |
2025-06-01 23:56:32.536795 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-06-01 23:56:32.536803 | orchestrator | Sunday 01 June 2025 23:54:46 +0000 (0:00:00.339) 0:00:22.308 ***********
2025-06-01 23:56:32.536811 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:56:32.536819 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:56:32.536827 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:56:32.536834 | orchestrator |
2025-06-01 23:56:32.536842 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-06-01 23:56:32.536850 | orchestrator | Sunday 01 June 2025 23:54:46 +0000 (0:00:00.324) 0:00:22.633 ***********
2025-06-01 23:56:32.536880 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:56:32.536888 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:56:32.536896 | orchestrator | skipping: [testbed-node-5]
2025-06-01 23:56:32.536904 | orchestrator |
2025-06-01 23:56:32.536911 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-06-01 23:56:32.536919 | orchestrator | Sunday 01 June 2025 23:54:46 +0000 (0:00:00.310) 0:00:22.943 ***********
2025-06-01 23:56:32.536927 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:56:32.536935 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:56:32.536942 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:56:32.536950 | orchestrator |
2025-06-01 23:56:32.536958 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-06-01 23:56:32.536966 | orchestrator | Sunday 01 June 2025 23:54:47 +0000 (0:00:00.628) 0:00:23.571 ***********
2025-06-01 23:56:32.536974 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-01 23:56:32.536981 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-01 23:56:32.536989 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-01 23:56:32.536997 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:56:32.537005 | orchestrator |
2025-06-01 23:56:32.537013 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-06-01 23:56:32.537020 | orchestrator | Sunday 01 June 2025 23:54:47 +0000 (0:00:00.410) 0:00:23.982 ***********
2025-06-01 23:56:32.537028 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-01 23:56:32.537036 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-01 23:56:32.537043 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-01 23:56:32.537051 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:56:32.537059 | orchestrator |
2025-06-01 23:56:32.537067 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-06-01 23:56:32.537074 | orchestrator | Sunday 01 June 2025 23:54:48 +0000 (0:00:00.369) 0:00:24.351 ***********
2025-06-01 23:56:32.537082 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-01 23:56:32.537090 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-01 23:56:32.537098 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-01 23:56:32.537105 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:56:32.537113 | orchestrator |
2025-06-01 23:56:32.537121 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-06-01 23:56:32.537128 | orchestrator | Sunday 01 June 2025 23:54:48 +0000 (0:00:00.373) 0:00:24.724 ***********
2025-06-01 23:56:32.537136 | orchestrator | ok: [testbed-node-3]
2025-06-01 23:56:32.537150 | orchestrator | ok: [testbed-node-4]
2025-06-01 23:56:32.537157 | orchestrator | ok: [testbed-node-5]
2025-06-01 23:56:32.537165 | orchestrator |
2025-06-01 23:56:32.537173 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-06-01 23:56:32.537181 | orchestrator | Sunday 01 June 2025 23:54:48 +0000 (0:00:00.328) 0:00:25.052 ***********
2025-06-01 23:56:32.537194 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-06-01 23:56:32.537202 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-06-01 23:56:32.537210 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-06-01 23:56:32.537218 | orchestrator |
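Of the radosgw address variants above, only "Set_fact _radosgw_address to radosgw_address" reports ok: the testbed sets radosgw_address explicitly, so the radosgw_address_block and radosgw_interface branches are skipped, and a single rgw instance (item=0) is registered per host. A plausible shape for the instance fact, where radosgw_num_instances and radosgw_frontend_port are assumed defaults rather than values taken from this log:

- name: Set_fact rgw_instances
  ansible.builtin.set_fact:
    rgw_instances: "{{ rgw_instances | default([]) + [{'instance_name': 'rgw' ~ item, 'radosgw_address': _radosgw_address, 'radosgw_frontend_port': radosgw_frontend_port | int + item | int}] }}"
  with_sequence: start=0 end={{ radosgw_num_instances | int - 1 }}   # one instance (item=0) in this run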
2025-06-01 23:56:32.537225 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-06-01 23:56:32.537233 | orchestrator | Sunday 01 June 2025 23:54:49 +0000 (0:00:00.487) 0:00:25.540 ***********
2025-06-01 23:56:32.537241 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-06-01 23:56:32.537249 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-01 23:56:32.537257 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-01 23:56:32.537264 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-06-01 23:56:32.537272 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-06-01 23:56:32.537280 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-06-01 23:56:32.537288 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-06-01 23:56:32.537295 | orchestrator |
2025-06-01 23:56:32.537303 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-06-01 23:56:32.537311 | orchestrator | Sunday 01 June 2025 23:54:50 +0000 (0:00:00.992) 0:00:26.533 ***********
2025-06-01 23:56:32.537318 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-06-01 23:56:32.537326 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-01 23:56:32.537334 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-01 23:56:32.537341 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-06-01 23:56:32.537349 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-06-01 23:56:32.537357 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-06-01 23:56:32.537365 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-06-01 23:56:32.537372 | orchestrator |
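ceph_run_cmd and ceph_admin_command are computed once per cluster member, which is why every loop iteration is delegated ("testbed-node-3 -> testbed-node-0(192.168.16.10)" and so on, including testbed-manager): later tasks need a ceph CLI prefix that is valid on whichever host they run against. A sketch of the idea; the container-runtime expression and the host list are illustrative, not the role's verbatim task:

- name: Set_fact ceph_run_cmd
  ansible.builtin.set_fact:
    ceph_run_cmd: "{{ container_binary }} exec ceph-mon-{{ hostvars[item]['ansible_facts']['hostname'] }} ceph"
  delegate_to: "{{ item }}"
  delegate_facts: true                # store the fact on the delegated host, as the arrows suggest
  with_items: "{{ groups['all'] }}"   # this run iterates the six nodes plus testbed-manager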
2025-06-01 23:56:32.537384 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2025-06-01 23:56:32.537392 | orchestrator | Sunday 01 June 2025 23:54:52 +0000 (0:00:01.876) 0:00:28.409 ***********
2025-06-01 23:56:32.537400 | orchestrator | skipping: [testbed-node-3]
2025-06-01 23:56:32.537408 | orchestrator | skipping: [testbed-node-4]
2025-06-01 23:56:32.537416 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2025-06-01 23:56:32.537423 | orchestrator |
2025-06-01 23:56:32.537431 | orchestrator | TASK [create openstack pool(s)] ************************************************
2025-06-01 23:56:32.537439 | orchestrator | Sunday 01 June 2025 23:54:52 +0000 (0:00:00.367) 0:00:28.777 ***********
2025-06-01 23:56:32.537447 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-01 23:56:32.537456 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-01 23:56:32.537464 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-01 23:56:32.537472 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-01 23:56:32.537490 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-01 23:56:32.537505 | orchestrator |
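"create openstack pool(s)" runs only on testbed-node-5 (the host that received the openstack_config.yml include) and is delegated to the first monitor, testbed-node-0. The loop items above fully determine the result: five RBD pools (backups, volumes, images, metrics, vms), each with pg_num/pgp_num 32, replica size 3, the replicated_rule CRUSH rule, and pg_autoscale_mode disabled. Expressed as a standalone sketch; the openstack_pools variable name and the exact CLI invocation are assumptions, with size and application presumably applied by follow-up "ceph osd pool set" / "ceph osd pool application enable" calls:

- name: create openstack pool(s)
  ansible.builtin.command: >-
    {{ ceph_run_cmd }} --cluster ceph osd pool create
    {{ item.name }} {{ item.pg_num }} {{ item.pgp_num }}
    replicated {{ item.rule_name }}
  delegate_to: "{{ groups[mon_group_name][0] }}"
  with_items: "{{ openstack_pools }}"
  vars:
    openstack_pools:   # values taken from the loop items in the log
      - { name: backups, pg_num: 32, pgp_num: 32, rule_name: replicated_rule, size: 3, application: rbd }
      - { name: volumes, pg_num: 32, pgp_num: 32, rule_name: replicated_rule, size: 3, application: rbd }
      - { name: images,  pg_num: 32, pgp_num: 32, rule_name: replicated_rule, size: 3, application: rbd }
      - { name: metrics, pg_num: 32, pgp_num: 32, rule_name: replicated_rule, size: 3, application: rbd }
      - { name: vms,     pg_num: 32, pgp_num: 32, rule_name: replicated_rule, size: 3, application: rbd }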
'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-01 23:56:32.537505 | orchestrator | 2025-06-01 23:56:32.537519 | orchestrator | TASK [generate keys] *********************************************************** 2025-06-01 23:56:32.537534 | orchestrator | Sunday 01 June 2025 23:55:37 +0000 (0:00:44.882) 0:01:13.659 *********** 2025-06-01 23:56:32.537543 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 23:56:32.537555 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 23:56:32.537563 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 23:56:32.537571 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 23:56:32.537578 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 23:56:32.537586 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 23:56:32.537593 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-06-01 23:56:32.537601 | orchestrator | 2025-06-01 23:56:32.537609 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-06-01 23:56:32.537616 | orchestrator | Sunday 01 June 2025 23:56:00 +0000 (0:00:23.367) 0:01:37.027 *********** 2025-06-01 23:56:32.537624 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 23:56:32.537631 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 23:56:32.537639 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 23:56:32.537646 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 23:56:32.537654 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 23:56:32.537662 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 23:56:32.537669 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-01 23:56:32.537677 | orchestrator | 2025-06-01 23:56:32.537684 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-06-01 23:56:32.537692 | orchestrator | Sunday 01 June 2025 23:56:12 +0000 (0:00:11.913) 0:01:48.941 *********** 2025-06-01 23:56:32.537700 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 23:56:32.537707 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-01 23:56:32.537715 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-01 23:56:32.537723 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 23:56:32.537731 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-01 23:56:32.537738 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-01 23:56:32.537751 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-01 23:56:32.537759 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-01 23:56:32.537766 | orchestrator | changed: [testbed-node-5 -> 
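
The generate keys task above creates the CephX client keys on the first monitor, which are then read back by get keys from monitors and distributed by copy ceph key(s) if needed. A sketch of the equivalent CLI calls; only the client names are taken from this log, the capability profiles are illustrative assumptions:

    import subprocess

    # Client keys matching the keyring names fetched later in this log.
    clients = ["cinder", "cinder-backup", "nova", "glance", "gnocchi", "manila"]

    for client in clients:
        # get-or-create is idempotent: it returns the existing key if present.
        subprocess.run(
            ["ceph", "auth", "get-or-create", f"client.{client}",
             "mon", "profile rbd",                 # illustrative caps, not from this log
             "osd", "profile rbd pool=volumes",    # illustrative caps, not from this log
             "-o", f"/etc/ceph/ceph.client.{client}.keyring"],
            check=True)
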
testbed-node-2(192.168.16.12)] => (item=None)
2025-06-01 23:56:32.537774 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 23:56:32.537782 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-01 23:56:32.537796 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-01 23:56:32.537803 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 23:56:32.537811 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-01 23:56:32.537819 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-01 23:56:32.537827 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-01 23:56:32.537834 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-01 23:56:32.537842 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-01 23:56:32.537850 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-06-01 23:56:32.537878 | orchestrator |
2025-06-01 23:56:32.537886 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 23:56:32.537894 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2025-06-01 23:56:32.537903 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-06-01 23:56:32.537911 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-06-01 23:56:32.537919 | orchestrator |
2025-06-01 23:56:32.537926 | orchestrator |
2025-06-01 23:56:32.537934 | orchestrator |
2025-06-01 23:56:32.537942 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 23:56:32.537949 | orchestrator | Sunday 01 June 2025 23:56:29 +0000 (0:00:17.136) 0:02:06.077 ***********
2025-06-01 23:56:32.537957 | orchestrator | ===============================================================================
2025-06-01 23:56:32.537965 | orchestrator | create openstack pool(s) ----------------------------------------------- 44.88s
2025-06-01 23:56:32.537973 | orchestrator | generate keys ---------------------------------------------------------- 23.37s
2025-06-01 23:56:32.537980 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.14s
2025-06-01 23:56:32.537988 | orchestrator | get keys from monitors ------------------------------------------------- 11.91s
2025-06-01 23:56:32.538003 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.11s
2025-06-01 23:56:32.538011 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.88s
2025-06-01 23:56:32.538048 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.59s
2025-06-01 23:56:32.538056 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.99s
2025-06-01 23:56:32.538064 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.84s
2025-06-01 23:56:32.538072 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.79s
2025-06-01 23:56:32.538079 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.76s
2025-06-01 23:56:32.538087 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.76s
2025-06-01 23:56:32.538095 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.71s
2025-06-01 23:56:32.538103 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.64s
2025-06-01 23:56:32.538110 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.63s
2025-06-01 23:56:32.538118 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.62s
2025-06-01 23:56:32.538126 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.61s
2025-06-01 23:56:32.538134 | orchestrator | ceph-facts : Set_fact discovered_interpreter_python if not previously set --- 0.61s
2025-06-01 23:56:32.538141 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.61s
2025-06-01 23:56:32.538155 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.61s
2025-06-01 23:56:32.538163 | orchestrator | 2025-06-01 23:56:32 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:56:35.574933 | orchestrator | 2025-06-01 23:56:35 | INFO  | Task e4cbb99e-200a-41d9-903b-acafd41826ad is in state STARTED
2025-06-01 23:56:35.576087 | orchestrator | 2025-06-01 23:56:35 | INFO  | Task a684174e-de06-4f8f-8fcc-5366b9547e64 is in state STARTED
2025-06-01 23:56:35.578164 | orchestrator | 2025-06-01 23:56:35 | INFO  | Task 14223f4b-1105-4bc8-b61f-8af03e11e27b is in state STARTED
2025-06-01 23:56:35.578232 | orchestrator | 2025-06-01 23:56:35 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:56:38.629556 | orchestrator | 2025-06-01 23:56:38 | INFO  | Task e4cbb99e-200a-41d9-903b-acafd41826ad is in state STARTED
2025-06-01 23:56:38.631035 | orchestrator | 2025-06-01 23:56:38 | INFO  | Task a684174e-de06-4f8f-8fcc-5366b9547e64 is in state STARTED
2025-06-01 23:56:38.632616 | orchestrator | 2025-06-01 23:56:38 | INFO  | Task 14223f4b-1105-4bc8-b61f-8af03e11e27b is in state STARTED
2025-06-01 23:56:38.632662 | orchestrator | 2025-06-01 23:56:38 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:56:41.679656 | orchestrator | 2025-06-01 23:56:41 | INFO  | Task e4cbb99e-200a-41d9-903b-acafd41826ad is in state STARTED
2025-06-01 23:56:41.681336 | orchestrator | 2025-06-01 23:56:41 | INFO  | Task a684174e-de06-4f8f-8fcc-5366b9547e64 is in state STARTED
2025-06-01 23:56:41.683094 | orchestrator | 2025-06-01 23:56:41 | INFO  | Task 14223f4b-1105-4bc8-b61f-8af03e11e27b is in state STARTED
2025-06-01 23:56:41.683130 | orchestrator | 2025-06-01 23:56:41 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:56:44.737635 | orchestrator | 2025-06-01 23:56:44 | INFO  | Task e4cbb99e-200a-41d9-903b-acafd41826ad is in state STARTED
2025-06-01 23:56:44.737778 | orchestrator | 2025-06-01 23:56:44 | INFO  | Task a684174e-de06-4f8f-8fcc-5366b9547e64 is in state STARTED
2025-06-01 23:56:44.739128 | orchestrator | 2025-06-01 23:56:44 | INFO  | Task 14223f4b-1105-4bc8-b61f-8af03e11e27b is in state STARTED
2025-06-01 23:56:44.739338 | orchestrator | 2025-06-01 23:56:44 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:56:47.796022 | orchestrator | 2025-06-01 23:56:47 | INFO  | Task e4cbb99e-200a-41d9-903b-acafd41826ad is in state STARTED 2025-06-01
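
The interleaved INFO lines are the deploy wrapper polling its task queue once per second until every task id reaches SUCCESS. Reduced to a sketch, assuming a Celery-style result backend with a configured app (the AsyncResult usage is an assumption; the STARTED/SUCCESS states match what is logged here):

    import time
    from celery.result import AsyncResult  # assumption: Celery-style backend

    task_ids = [
        "e4cbb99e-200a-41d9-903b-acafd41826ad",
        "a684174e-de06-4f8f-8fcc-5366b9547e64",
        "14223f4b-1105-4bc8-b61f-8af03e11e27b",
    ]

    while task_ids:
        for task_id in list(task_ids):
            state = AsyncResult(task_id).state  # e.g. STARTED, SUCCESS
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                task_ids.remove(task_id)
        print("Wait 1 second(s) until the next check")
        time.sleep(1)
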
23:56:47.799345 | orchestrator | 2025-06-01 23:56:47 | INFO  | Task a684174e-de06-4f8f-8fcc-5366b9547e64 is in state STARTED 2025-06-01 23:56:47.805028 | orchestrator | 2025-06-01 23:56:47 | INFO  | Task 14223f4b-1105-4bc8-b61f-8af03e11e27b is in state STARTED 2025-06-01 23:56:47.805083 | orchestrator | 2025-06-01 23:56:47 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:56:50.863736 | orchestrator | 2025-06-01 23:56:50 | INFO  | Task e4cbb99e-200a-41d9-903b-acafd41826ad is in state STARTED 2025-06-01 23:56:50.866389 | orchestrator | 2025-06-01 23:56:50 | INFO  | Task a684174e-de06-4f8f-8fcc-5366b9547e64 is in state STARTED 2025-06-01 23:56:50.868316 | orchestrator | 2025-06-01 23:56:50 | INFO  | Task 14223f4b-1105-4bc8-b61f-8af03e11e27b is in state STARTED 2025-06-01 23:56:50.868493 | orchestrator | 2025-06-01 23:56:50 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:56:53.918640 | orchestrator | 2025-06-01 23:56:53 | INFO  | Task e4cbb99e-200a-41d9-903b-acafd41826ad is in state STARTED 2025-06-01 23:56:53.918747 | orchestrator | 2025-06-01 23:56:53 | INFO  | Task a684174e-de06-4f8f-8fcc-5366b9547e64 is in state STARTED 2025-06-01 23:56:53.918762 | orchestrator | 2025-06-01 23:56:53 | INFO  | Task 14223f4b-1105-4bc8-b61f-8af03e11e27b is in state STARTED 2025-06-01 23:56:53.918797 | orchestrator | 2025-06-01 23:56:53 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:56:56.965213 | orchestrator | 2025-06-01 23:56:56 | INFO  | Task e4cbb99e-200a-41d9-903b-acafd41826ad is in state STARTED 2025-06-01 23:56:56.969494 | orchestrator | 2025-06-01 23:56:56 | INFO  | Task a684174e-de06-4f8f-8fcc-5366b9547e64 is in state STARTED 2025-06-01 23:56:56.971606 | orchestrator | 2025-06-01 23:56:56 | INFO  | Task 14223f4b-1105-4bc8-b61f-8af03e11e27b is in state STARTED 2025-06-01 23:56:56.971637 | orchestrator | 2025-06-01 23:56:56 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:57:00.022988 | orchestrator | 2025-06-01 23:57:00 | INFO  | Task e4cbb99e-200a-41d9-903b-acafd41826ad is in state STARTED 2025-06-01 23:57:00.025390 | orchestrator | 2025-06-01 23:57:00 | INFO  | Task a684174e-de06-4f8f-8fcc-5366b9547e64 is in state STARTED 2025-06-01 23:57:00.029297 | orchestrator | 2025-06-01 23:57:00 | INFO  | Task 14223f4b-1105-4bc8-b61f-8af03e11e27b is in state STARTED 2025-06-01 23:57:00.029344 | orchestrator | 2025-06-01 23:57:00 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:57:03.089247 | orchestrator | 2025-06-01 23:57:03 | INFO  | Task e4cbb99e-200a-41d9-903b-acafd41826ad is in state STARTED 2025-06-01 23:57:03.090138 | orchestrator | 2025-06-01 23:57:03 | INFO  | Task a684174e-de06-4f8f-8fcc-5366b9547e64 is in state SUCCESS 2025-06-01 23:57:03.092685 | orchestrator | 2025-06-01 23:57:03 | INFO  | Task 61c7a2bd-da53-4724-874e-5791106711d6 is in state STARTED 2025-06-01 23:57:03.094549 | orchestrator | 2025-06-01 23:57:03 | INFO  | Task 14223f4b-1105-4bc8-b61f-8af03e11e27b is in state STARTED 2025-06-01 23:57:03.094598 | orchestrator | 2025-06-01 23:57:03 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:57:06.147926 | orchestrator | 2025-06-01 23:57:06 | INFO  | Task e4cbb99e-200a-41d9-903b-acafd41826ad is in state STARTED 2025-06-01 23:57:06.149234 | orchestrator | 2025-06-01 23:57:06 | INFO  | Task 61c7a2bd-da53-4724-874e-5791106711d6 is in state STARTED 2025-06-01 23:57:06.151006 | orchestrator | 2025-06-01 23:57:06 | INFO  | Task 14223f4b-1105-4bc8-b61f-8af03e11e27b is in state STARTED 2025-06-01 23:57:06.151618 | 
orchestrator | 2025-06-01 23:57:06 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:57:09.201343 | orchestrator | 2025-06-01 23:57:09 | INFO  | Task e4cbb99e-200a-41d9-903b-acafd41826ad is in state STARTED 2025-06-01 23:57:09.202256 | orchestrator | 2025-06-01 23:57:09 | INFO  | Task 61c7a2bd-da53-4724-874e-5791106711d6 is in state STARTED 2025-06-01 23:57:09.204126 | orchestrator | 2025-06-01 23:57:09 | INFO  | Task 14223f4b-1105-4bc8-b61f-8af03e11e27b is in state STARTED 2025-06-01 23:57:09.204607 | orchestrator | 2025-06-01 23:57:09 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:57:12.255967 | orchestrator | 2025-06-01 23:57:12 | INFO  | Task e4cbb99e-200a-41d9-903b-acafd41826ad is in state STARTED 2025-06-01 23:57:12.257282 | orchestrator | 2025-06-01 23:57:12 | INFO  | Task 61c7a2bd-da53-4724-874e-5791106711d6 is in state STARTED 2025-06-01 23:57:12.258589 | orchestrator | 2025-06-01 23:57:12 | INFO  | Task 14223f4b-1105-4bc8-b61f-8af03e11e27b is in state STARTED 2025-06-01 23:57:12.258625 | orchestrator | 2025-06-01 23:57:12 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:57:15.298451 | orchestrator | 2025-06-01 23:57:15 | INFO  | Task e4cbb99e-200a-41d9-903b-acafd41826ad is in state STARTED 2025-06-01 23:57:15.298548 | orchestrator | 2025-06-01 23:57:15 | INFO  | Task 61c7a2bd-da53-4724-874e-5791106711d6 is in state STARTED 2025-06-01 23:57:15.302160 | orchestrator | 2025-06-01 23:57:15 | INFO  | Task 14223f4b-1105-4bc8-b61f-8af03e11e27b is in state STARTED 2025-06-01 23:57:15.302270 | orchestrator | 2025-06-01 23:57:15 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:57:18.354529 | orchestrator | 2025-06-01 23:57:18 | INFO  | Task e4cbb99e-200a-41d9-903b-acafd41826ad is in state STARTED 2025-06-01 23:57:18.358181 | orchestrator | 2025-06-01 23:57:18 | INFO  | Task 61c7a2bd-da53-4724-874e-5791106711d6 is in state STARTED 2025-06-01 23:57:18.360233 | orchestrator | 2025-06-01 23:57:18 | INFO  | Task 14223f4b-1105-4bc8-b61f-8af03e11e27b is in state STARTED 2025-06-01 23:57:18.360405 | orchestrator | 2025-06-01 23:57:18 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:57:21.408720 | orchestrator | 2025-06-01 23:57:21 | INFO  | Task e4cbb99e-200a-41d9-903b-acafd41826ad is in state STARTED 2025-06-01 23:57:21.408879 | orchestrator | 2025-06-01 23:57:21 | INFO  | Task 61c7a2bd-da53-4724-874e-5791106711d6 is in state STARTED 2025-06-01 23:57:21.409910 | orchestrator | 2025-06-01 23:57:21 | INFO  | Task 14223f4b-1105-4bc8-b61f-8af03e11e27b is in state STARTED 2025-06-01 23:57:21.409950 | orchestrator | 2025-06-01 23:57:21 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:57:24.458362 | orchestrator | 2025-06-01 23:57:24 | INFO  | Task e4cbb99e-200a-41d9-903b-acafd41826ad is in state STARTED 2025-06-01 23:57:24.459356 | orchestrator | 2025-06-01 23:57:24 | INFO  | Task 61c7a2bd-da53-4724-874e-5791106711d6 is in state STARTED 2025-06-01 23:57:24.459662 | orchestrator | 2025-06-01 23:57:24 | INFO  | Task 14223f4b-1105-4bc8-b61f-8af03e11e27b is in state STARTED 2025-06-01 23:57:24.459693 | orchestrator | 2025-06-01 23:57:24 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:57:27.500454 | orchestrator | 2025-06-01 23:57:27 | INFO  | Task e4cbb99e-200a-41d9-903b-acafd41826ad is in state STARTED 2025-06-01 23:57:27.500917 | orchestrator | 2025-06-01 23:57:27 | INFO  | Task 61c7a2bd-da53-4724-874e-5791106711d6 is in state STARTED 2025-06-01 23:57:27.502212 | orchestrator | 2025-06-01 23:57:27 | INFO  | Task 
14223f4b-1105-4bc8-b61f-8af03e11e27b is in state STARTED 2025-06-01 23:57:27.502240 | orchestrator | 2025-06-01 23:57:27 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:57:30.555738 | orchestrator | 2025-06-01 23:57:30 | INFO  | Task e4cbb99e-200a-41d9-903b-acafd41826ad is in state STARTED 2025-06-01 23:57:30.558358 | orchestrator | 2025-06-01 23:57:30 | INFO  | Task 61c7a2bd-da53-4724-874e-5791106711d6 is in state STARTED 2025-06-01 23:57:30.560749 | orchestrator | 2025-06-01 23:57:30 | INFO  | Task 14223f4b-1105-4bc8-b61f-8af03e11e27b is in state STARTED 2025-06-01 23:57:30.561935 | orchestrator | 2025-06-01 23:57:30 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:57:33.613591 | orchestrator | 2025-06-01 23:57:33 | INFO  | Task e4cbb99e-200a-41d9-903b-acafd41826ad is in state SUCCESS 2025-06-01 23:57:33.614675 | orchestrator | 2025-06-01 23:57:33.614720 | orchestrator | 2025-06-01 23:57:33.614734 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-06-01 23:57:33.614746 | orchestrator | 2025-06-01 23:57:33.614757 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-06-01 23:57:33.614769 | orchestrator | Sunday 01 June 2025 23:56:36 +0000 (0:00:00.167) 0:00:00.167 *********** 2025-06-01 23:57:33.614780 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-06-01 23:57:33.614793 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-01 23:57:33.614804 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-01 23:57:33.614815 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-06-01 23:57:33.615257 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-01 23:57:33.615280 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-06-01 23:57:33.615292 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-06-01 23:57:33.615303 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-06-01 23:57:33.615313 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-06-01 23:57:33.615324 | orchestrator | 2025-06-01 23:57:33.615335 | orchestrator | TASK [Create share directory] ************************************************** 2025-06-01 23:57:33.615347 | orchestrator | Sunday 01 June 2025 23:56:40 +0000 (0:00:04.246) 0:00:04.413 *********** 2025-06-01 23:57:33.615359 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-01 23:57:33.615370 | orchestrator | 2025-06-01 23:57:33.615381 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-06-01 23:57:33.615392 | orchestrator | Sunday 01 June 2025 23:56:41 +0000 (0:00:00.972) 0:00:05.385 *********** 2025-06-01 23:57:33.615403 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-06-01 23:57:33.615414 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-06-01 23:57:33.615425 | orchestrator | ok: [testbed-manager -> localhost] => 
(item=ceph.client.cinder.keyring)
2025-06-01 23:57:33.615436 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-06-01 23:57:33.615447 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-06-01 23:57:33.615472 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-06-01 23:57:33.615484 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-06-01 23:57:33.615494 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-06-01 23:57:33.615505 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-06-01 23:57:33.615516 | orchestrator |
2025-06-01 23:57:33.615527 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2025-06-01 23:57:33.615538 | orchestrator | Sunday 01 June 2025 23:56:54 +0000 (0:00:13.286) 0:00:18.672 ***********
2025-06-01 23:57:33.615549 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2025-06-01 23:57:33.615561 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-06-01 23:57:33.615580 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-06-01 23:57:33.615599 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2025-06-01 23:57:33.615618 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-06-01 23:57:33.615637 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2025-06-01 23:57:33.615655 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2025-06-01 23:57:33.615675 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2025-06-01 23:57:33.615694 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2025-06-01 23:57:33.615715 | orchestrator |
2025-06-01 23:57:33.615734 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 23:57:33.615749 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 23:57:33.615761 | orchestrator |
2025-06-01 23:57:33.615772 | orchestrator |
2025-06-01 23:57:33.615783 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 23:57:33.615794 | orchestrator | Sunday 01 June 2025 23:57:01 +0000 (0:00:06.940) 0:00:25.612 ***********
2025-06-01 23:57:33.615817 | orchestrator | ===============================================================================
2025-06-01 23:57:33.615882 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.29s
2025-06-01 23:57:33.615898 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.94s
2025-06-01 23:57:33.615909 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.25s
2025-06-01 23:57:33.615921 | orchestrator | Create share directory -------------------------------------------------- 0.97s
2025-06-01 23:57:33.615933 | orchestrator |
2025-06-01 23:57:33.615945 | orchestrator |
2025-06-01 23:57:33.615958 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-01 23:57:33.615970 | orchestrator |
2025-06-01
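
The play above fans the fetched keyrings out twice: once into a share directory and once into the configuration repository on the manager. A sketch of that copy pattern; all paths here are illustrative assumptions, only the keyring names are taken from the log:

    import shutil
    from pathlib import Path

    # Keyrings fetched from the first monitor, as listed in the tasks above.
    keyrings = [
        "ceph.client.admin.keyring", "ceph.client.cinder.keyring",
        "ceph.client.cinder-backup.keyring", "ceph.client.nova.keyring",
        "ceph.client.glance.keyring", "ceph.client.gnocchi.keyring",
        "ceph.client.manila.keyring",
    ]

    fetched = Path("/tmp/fetched-keys")          # assumed staging directory
    targets = [Path("/share"),                    # assumed share directory
               Path("/opt/configuration/ceph")]   # assumed configuration directory

    for target in targets:
        target.mkdir(parents=True, exist_ok=True)
        for name in keyrings:
            shutil.copy2(fetched / name, target / name)
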
23:57:33.615996 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-01 23:57:33.616008 | orchestrator | Sunday 01 June 2025 23:55:41 +0000 (0:00:00.258) 0:00:00.258 *********** 2025-06-01 23:57:33.616019 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:57:33.616030 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:57:33.616041 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:57:33.616052 | orchestrator | 2025-06-01 23:57:33.616072 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-01 23:57:33.616092 | orchestrator | Sunday 01 June 2025 23:55:42 +0000 (0:00:00.306) 0:00:00.564 *********** 2025-06-01 23:57:33.616111 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-06-01 23:57:33.616133 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-06-01 23:57:33.616152 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-06-01 23:57:33.616170 | orchestrator | 2025-06-01 23:57:33.616181 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-06-01 23:57:33.616192 | orchestrator | 2025-06-01 23:57:33.616203 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-01 23:57:33.616213 | orchestrator | Sunday 01 June 2025 23:55:42 +0000 (0:00:00.416) 0:00:00.981 *********** 2025-06-01 23:57:33.616224 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:57:33.616235 | orchestrator | 2025-06-01 23:57:33.616246 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-06-01 23:57:33.616355 | orchestrator | Sunday 01 June 2025 23:55:43 +0000 (0:00:00.482) 0:00:01.464 *********** 2025-06-01 23:57:33.616387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-01 23:57:33.616431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-01 23:57:33.616452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-01 23:57:33.616471 | orchestrator | 2025-06-01 23:57:33.616482 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-06-01 23:57:33.616493 | orchestrator | Sunday 01 June 2025 23:55:44 +0000 (0:00:01.135) 0:00:02.599 *********** 2025-06-01 23:57:33.616510 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:57:33.616530 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:57:33.616548 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:57:33.616566 | orchestrator | 2025-06-01 23:57:33.616583 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-01 23:57:33.616602 | orchestrator | Sunday 01 June 2025 23:55:44 +0000 (0:00:00.472) 0:00:03.072 *********** 2025-06-01 23:57:33.616621 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-01 23:57:33.616639 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-01 23:57:33.616667 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-06-01 23:57:33.616679 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-06-01 23:57:33.616690 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-06-01 23:57:33.616701 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-06-01 23:57:33.616711 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-06-01 23:57:33.616730 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-06-01 23:57:33.616748 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-01 23:57:33.616766 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-01 23:57:33.616784 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-06-01 23:57:33.616801 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 
'enabled': False})  2025-06-01 23:57:33.616818 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-06-01 23:57:33.616863 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-06-01 23:57:33.616882 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-06-01 23:57:33.616901 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-06-01 23:57:33.616919 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-01 23:57:33.616938 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-01 23:57:33.616958 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-06-01 23:57:33.616976 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-06-01 23:57:33.616994 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-06-01 23:57:33.617006 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-06-01 23:57:33.617028 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-06-01 23:57:33.617040 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-06-01 23:57:33.617055 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-06-01 23:57:33.617085 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-06-01 23:57:33.617104 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-06-01 23:57:33.617124 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-06-01 23:57:33.617144 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-06-01 23:57:33.617164 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-06-01 23:57:33.617183 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-06-01 23:57:33.617198 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-06-01 23:57:33.617210 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-06-01 23:57:33.617222 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-06-01 23:57:33.617235 | orchestrator | 2025-06-01 23:57:33.617248 | orchestrator | TASK [horizon : Update policy file name] 
*************************************** 2025-06-01 23:57:33.617260 | orchestrator | Sunday 01 June 2025 23:55:45 +0000 (0:00:00.754) 0:00:03.826 *********** 2025-06-01 23:57:33.617273 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:57:33.617285 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:57:33.617297 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:57:33.617310 | orchestrator | 2025-06-01 23:57:33.617322 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-01 23:57:33.617335 | orchestrator | Sunday 01 June 2025 23:55:45 +0000 (0:00:00.300) 0:00:04.127 *********** 2025-06-01 23:57:33.617348 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:57:33.617360 | orchestrator | 2025-06-01 23:57:33.617371 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-01 23:57:33.617391 | orchestrator | Sunday 01 June 2025 23:55:45 +0000 (0:00:00.123) 0:00:04.251 *********** 2025-06-01 23:57:33.617402 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:57:33.617413 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:57:33.617424 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:57:33.617435 | orchestrator | 2025-06-01 23:57:33.617446 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-01 23:57:33.617457 | orchestrator | Sunday 01 June 2025 23:55:46 +0000 (0:00:00.465) 0:00:04.716 *********** 2025-06-01 23:57:33.617468 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:57:33.617479 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:57:33.617490 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:57:33.617501 | orchestrator | 2025-06-01 23:57:33.617512 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-01 23:57:33.617523 | orchestrator | Sunday 01 June 2025 23:55:46 +0000 (0:00:00.337) 0:00:05.054 *********** 2025-06-01 23:57:33.617533 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:57:33.617552 | orchestrator | 2025-06-01 23:57:33.617563 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-01 23:57:33.617574 | orchestrator | Sunday 01 June 2025 23:55:46 +0000 (0:00:00.130) 0:00:05.185 *********** 2025-06-01 23:57:33.617585 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:57:33.617596 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:57:33.617607 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:57:33.617618 | orchestrator | 2025-06-01 23:57:33.617629 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-01 23:57:33.617640 | orchestrator | Sunday 01 June 2025 23:55:47 +0000 (0:00:00.284) 0:00:05.470 *********** 2025-06-01 23:57:33.617651 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:57:33.617662 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:57:33.617672 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:57:33.617683 | orchestrator | 2025-06-01 23:57:33.617694 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-01 23:57:33.617705 | orchestrator | Sunday 01 June 2025 23:55:47 +0000 (0:00:00.295) 0:00:05.765 *********** 2025-06-01 23:57:33.617716 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:57:33.617726 | orchestrator | 2025-06-01 23:57:33.617737 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 
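
The repeated Update policy file name / Check if policies shall be overwritten / Update custom policy file name triplets come from policy_item.yml being included once per enabled service. Note the mixed 'enabled' values in the loop above: booleans (True/False) sit next to the strings 'yes' and 'no'. Ansible coerces both forms; a Python sketch of that filter:

    # Ansible-style truthiness for the 'enabled' flags seen in the loop above.
    def is_enabled(value):
        if isinstance(value, bool):
            return value
        return str(value).strip().lower() in ("yes", "true", "1", "on")

    services = [
        {"name": "cloudkitty", "enabled": False},   # skipped above
        {"name": "heat", "enabled": "no"},          # skipped above
        {"name": "cinder", "enabled": "yes"},       # included above
        {"name": "designate", "enabled": True},     # included above
    ]

    for svc in services:
        action = "include policy_item.yml" if is_enabled(svc["enabled"]) else "skip"
        print(f"{svc['name']}: {action}")
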
2025-06-01 23:57:33.617748 | orchestrator | Sunday 01 June 2025 23:55:47 +0000 (0:00:00.352) 0:00:06.117 *********** 2025-06-01 23:57:33.617759 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:57:33.617770 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:57:33.617781 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:57:33.617792 | orchestrator | 2025-06-01 23:57:33.617802 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-01 23:57:33.617813 | orchestrator | Sunday 01 June 2025 23:55:48 +0000 (0:00:00.290) 0:00:06.408 *********** 2025-06-01 23:57:33.617824 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:57:33.617990 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:57:33.618008 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:57:33.618081 | orchestrator | 2025-06-01 23:57:33.618093 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-01 23:57:33.618120 | orchestrator | Sunday 01 June 2025 23:55:48 +0000 (0:00:00.280) 0:00:06.688 *********** 2025-06-01 23:57:33.618130 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:57:33.618140 | orchestrator | 2025-06-01 23:57:33.618149 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-01 23:57:33.618159 | orchestrator | Sunday 01 June 2025 23:55:48 +0000 (0:00:00.123) 0:00:06.812 *********** 2025-06-01 23:57:33.618168 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:57:33.618178 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:57:33.618187 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:57:33.618197 | orchestrator | 2025-06-01 23:57:33.618206 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-01 23:57:33.618216 | orchestrator | Sunday 01 June 2025 23:55:48 +0000 (0:00:00.281) 0:00:07.094 *********** 2025-06-01 23:57:33.618225 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:57:33.618235 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:57:33.618244 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:57:33.618254 | orchestrator | 2025-06-01 23:57:33.618264 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-01 23:57:33.618273 | orchestrator | Sunday 01 June 2025 23:55:49 +0000 (0:00:00.548) 0:00:07.642 *********** 2025-06-01 23:57:33.618283 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:57:33.618292 | orchestrator | 2025-06-01 23:57:33.618302 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-01 23:57:33.618311 | orchestrator | Sunday 01 June 2025 23:55:49 +0000 (0:00:00.144) 0:00:07.787 *********** 2025-06-01 23:57:33.618321 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:57:33.618330 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:57:33.618340 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:57:33.618358 | orchestrator | 2025-06-01 23:57:33.618369 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-01 23:57:33.618379 | orchestrator | Sunday 01 June 2025 23:55:49 +0000 (0:00:00.308) 0:00:08.095 *********** 2025-06-01 23:57:33.618388 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:57:33.618535 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:57:33.618549 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:57:33.618559 | orchestrator | 2025-06-01 
23:57:33.618569 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-01 23:57:33.618579 | orchestrator | Sunday 01 June 2025 23:55:50 +0000 (0:00:00.288) 0:00:08.384 *********** 2025-06-01 23:57:33.618588 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:57:33.618598 | orchestrator | 2025-06-01 23:57:33.618607 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-01 23:57:33.618617 | orchestrator | Sunday 01 June 2025 23:55:50 +0000 (0:00:00.125) 0:00:08.509 *********** 2025-06-01 23:57:33.618626 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:57:33.618636 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:57:33.618646 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:57:33.618655 | orchestrator | 2025-06-01 23:57:33.618665 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-01 23:57:33.618674 | orchestrator | Sunday 01 June 2025 23:55:50 +0000 (0:00:00.461) 0:00:08.971 *********** 2025-06-01 23:57:33.618684 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:57:33.618693 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:57:33.618703 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:57:33.618712 | orchestrator | 2025-06-01 23:57:33.618734 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-01 23:57:33.618745 | orchestrator | Sunday 01 June 2025 23:55:50 +0000 (0:00:00.296) 0:00:09.268 *********** 2025-06-01 23:57:33.618754 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:57:33.618764 | orchestrator | 2025-06-01 23:57:33.618774 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-01 23:57:33.618783 | orchestrator | Sunday 01 June 2025 23:55:51 +0000 (0:00:00.137) 0:00:09.405 *********** 2025-06-01 23:57:33.618793 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:57:33.618803 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:57:33.618812 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:57:33.618822 | orchestrator | 2025-06-01 23:57:33.618854 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-01 23:57:33.618865 | orchestrator | Sunday 01 June 2025 23:55:51 +0000 (0:00:00.296) 0:00:09.701 *********** 2025-06-01 23:57:33.618874 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:57:33.618884 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:57:33.618893 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:57:33.618903 | orchestrator | 2025-06-01 23:57:33.618913 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-01 23:57:33.618922 | orchestrator | Sunday 01 June 2025 23:55:51 +0000 (0:00:00.294) 0:00:09.996 *********** 2025-06-01 23:57:33.618932 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:57:33.618941 | orchestrator | 2025-06-01 23:57:33.618950 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-01 23:57:33.618960 | orchestrator | Sunday 01 June 2025 23:55:51 +0000 (0:00:00.114) 0:00:10.110 *********** 2025-06-01 23:57:33.618970 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:57:33.618979 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:57:33.618989 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:57:33.618998 | orchestrator | 2025-06-01 23:57:33.619008 | 
orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-01 23:57:33.619018 | orchestrator | Sunday 01 June 2025 23:55:52 +0000 (0:00:00.516) 0:00:10.627 *********** 2025-06-01 23:57:33.619027 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:57:33.619037 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:57:33.619046 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:57:33.619056 | orchestrator | 2025-06-01 23:57:33.619065 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-01 23:57:33.619083 | orchestrator | Sunday 01 June 2025 23:55:52 +0000 (0:00:00.342) 0:00:10.969 *********** 2025-06-01 23:57:33.619093 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:57:33.619102 | orchestrator | 2025-06-01 23:57:33.619112 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-01 23:57:33.619122 | orchestrator | Sunday 01 June 2025 23:55:52 +0000 (0:00:00.132) 0:00:11.101 *********** 2025-06-01 23:57:33.619131 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:57:33.619141 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:57:33.619151 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:57:33.619160 | orchestrator | 2025-06-01 23:57:33.619170 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-01 23:57:33.619185 | orchestrator | Sunday 01 June 2025 23:55:53 +0000 (0:00:00.285) 0:00:11.386 *********** 2025-06-01 23:57:33.619196 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:57:33.619205 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:57:33.619215 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:57:33.619224 | orchestrator | 2025-06-01 23:57:33.619234 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-01 23:57:33.619244 | orchestrator | Sunday 01 June 2025 23:55:53 +0000 (0:00:00.495) 0:00:11.882 *********** 2025-06-01 23:57:33.619253 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:57:33.619263 | orchestrator | 2025-06-01 23:57:33.619272 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-01 23:57:33.619282 | orchestrator | Sunday 01 June 2025 23:55:53 +0000 (0:00:00.149) 0:00:12.032 *********** 2025-06-01 23:57:33.619292 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:57:33.619301 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:57:33.619311 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:57:33.619320 | orchestrator | 2025-06-01 23:57:33.619330 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-06-01 23:57:33.619339 | orchestrator | Sunday 01 June 2025 23:55:54 +0000 (0:00:00.322) 0:00:12.354 *********** 2025-06-01 23:57:33.619349 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:57:33.619358 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:57:33.619368 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:57:33.619377 | orchestrator | 2025-06-01 23:57:33.619387 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-06-01 23:57:33.619397 | orchestrator | Sunday 01 June 2025 23:55:55 +0000 (0:00:01.592) 0:00:13.947 *********** 2025-06-01 23:57:33.619406 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-06-01 23:57:33.619416 | orchestrator 
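
The config.json files copied in the task above are the per-service manifests that kolla containers read at startup: the entrypoint copies each listed file into place before exec'ing the command. A sketch of that convention with illustrative values; the actual horizon manifest is not shown in this log:

    import json

    # Sketch of the kolla config.json shape (command + config_files entries).
    config = {
        "command": "/usr/sbin/apache2 -DFOREGROUND",   # illustrative command
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/horizon.conf",
                "dest": "/etc/apache2/conf-enabled/horizon.conf",  # assumed dest
                "owner": "horizon",
                "perm": "0600",
            }
        ],
    }
    print(json.dumps(config, indent=2))
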
| changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-06-01 23:57:33.619425 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-06-01 23:57:33.619435 | orchestrator |
2025-06-01 23:57:33.619444 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2025-06-01 23:57:33.619454 | orchestrator | Sunday 01 June 2025 23:55:57 +0000 (0:00:02.169) 0:00:16.116 ***********
2025-06-01 23:57:33.619464 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-06-01 23:57:33.619474 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-06-01 23:57:33.619484 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-06-01 23:57:33.619493 | orchestrator |
2025-06-01 23:57:33.619503 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2025-06-01 23:57:33.619513 | orchestrator | Sunday 01 June 2025 23:55:59 +0000 (0:00:01.758) 0:00:17.875 ***********
2025-06-01 23:57:33.619529 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-06-01 23:57:33.619539 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-06-01 23:57:33.619562 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-06-01 23:57:33.619571 | orchestrator |
2025-06-01 23:57:33.619581 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2025-06-01 23:57:33.619591 | orchestrator | Sunday 01 June 2025 23:56:01 +0000 (0:00:01.563) 0:00:19.438 ***********
2025-06-01 23:57:33.619601 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:57:33.619610 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:57:33.619620 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:57:33.619630 | orchestrator |
2025-06-01 23:57:33.619639 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2025-06-01 23:57:33.619649 | orchestrator | Sunday 01 June 2025 23:56:01 +0000 (0:00:00.270) 0:00:19.709 ***********
2025-06-01 23:57:33.619658 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:57:33.619668 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:57:33.619678 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:57:33.619687 | orchestrator |
2025-06-01 23:57:33.619697 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-06-01 23:57:33.619707 | orchestrator | Sunday 01 June 2025 23:56:01 +0000 (0:00:00.281) 0:00:19.990 ***********
2025-06-01 23:57:33.619717 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 23:57:33.619726 | orchestrator |
2025-06-01 23:57:33.619736 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2025-06-01 23:57:33.619746 | orchestrator | Sunday 01 June 2025 23:56:02 +0000 (0:00:00.803) 0:00:20.793 ***********
2025-06-01 23:57:33.619764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-01 23:57:33.619787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {... same definition as for testbed-node-0 above, healthcheck target http://192.168.16.12:80 ...}})
2025-06-01 23:57:33.619811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {... same definition as for testbed-node-0 above, healthcheck target http://192.168.16.11:80 ...}})
2025-06-01 23:57:33.619842 | orchestrator |
2025-06-01 23:57:33.619852 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] ***
2025-06-01 23:57:33.619862 | orchestrator | Sunday 01 June 2025 23:56:03 +0000 (0:00:01.497) 0:00:22.290 ***********
2025-06-01 23:57:33.619882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {... same definition as above ...}})
2025-06-01 23:57:33.619893 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:57:33.619909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {... same definition as above ...}})
2025-06-01 23:57:33.619931 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:57:33.619948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {... same definition as above ...}})
2025-06-01 23:57:33.619959 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:57:33.619969 | orchestrator |
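The item dict logged above for testbed-node-0 is the per-service definition kolla-ansible uses to render both the container and its HAProxy frontends: one internal and one external HTTPS frontend on port 443 terminating TLS in front of plain-HTTP backends on port 80, redirect frontends for port 80, and an ACME exception that routes /.well-known/acme-challenge/ requests to the acme_client backend for certificate issuance. Rendered as YAML, the haproxy block reads roughly as follows; this is a sketch reconstructed from the logged values, and the surrounding variable name (horizon_services) is an assumption, not something this log shows:

    # Sketch of the horizon service entry, reconstructed from the logged item dict.
    horizon_services:
      horizon:
        container_name: horizon
        group: horizon
        enabled: true
        image: registry.osism.tech/kolla/horizon:2024.2
        haproxy:
          horizon:                     # internal VIP frontend
            enabled: true
            mode: http
            external: false
            port: '443'
            listen_port: '80'          # backends serve plain HTTP
            frontend_http_extra:
              - use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }
            backend_http_extra:
              - balance roundrobin
            tls_backend: 'no'
          horizon_external:
            enabled: true
            mode: http
            external: true
            external_fqdn: api.testbed.osism.xyz
            port: '443'
            listen_port: '80'
          horizon_external_redirect:   # plain-HTTP port 80 -> redirect
            enabled: true
            mode: redirect
            external: true
            port: '80'
            listen_port: '80'
          acme_client:
            enabled: true
            with_frontend: false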
2025-06-01 23:57:33.619979 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] *****
2025-06-01 23:57:33.619988 | orchestrator | Sunday 01 June 2025 23:56:04 +0000 (0:00:00.620) 0:00:22.911 ***********
2025-06-01 23:57:33.620006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {... same definition as above ...}})
2025-06-01 23:57:33.620025 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:57:33.620041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {... same definition as above ...}})
2025-06-01 23:57:33.620060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {... same definition as above ...}})
2025-06-01 23:57:33.620077 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:57:33.620087 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:57:33.620097 | orchestrator |
2025-06-01 23:57:33.620106 | orchestrator | TASK [horizon : Deploy horizon container] **************************************
2025-06-01 23:57:33.620116 | orchestrator | Sunday 01 June 2025 23:56:05 +0000 (0:00:01.066) 0:00:23.978 ***********
2025-06-01 23:57:33.620132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {... same definition as above, healthcheck target http://192.168.16.11:80 ...}})
2025-06-01 23:57:33.620156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {... same definition as above, healthcheck target http://192.168.16.10:80 ...}})
2025-06-01 23:57:33.620173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {... same definition as above, healthcheck target http://192.168.16.12:80 ...}})
2025-06-01 23:57:33.620190 | orchestrator |
2025-06-01 23:57:33.620200 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-06-01 23:57:33.620209 | orchestrator | Sunday 01 June 2025 23:56:06 +0000 (0:00:01.208) 0:00:25.187 ***********
2025-06-01 23:57:33.620219 | orchestrator | skipping: [testbed-node-0]
2025-06-01 23:57:33.620229 | orchestrator | skipping: [testbed-node-1]
2025-06-01 23:57:33.620238 | orchestrator | skipping: [testbed-node-2]
2025-06-01 23:57:33.620248 | orchestrator |
2025-06-01 23:57:33.620257 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-06-01 23:57:33.620267 | orchestrator | Sunday 01 June 2025 23:56:07 +0000 (0:00:00.305) 0:00:25.493 ***********
2025-06-01 23:57:33.620277 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 23:57:33.620287 | orchestrator |
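Each deployed container carries the healthcheck block seen in the item dicts: Docker periodically runs kolla's healthcheck_curl wrapper (shipped inside the image) against the node's internal API address and marks the container unhealthy after the configured number of failures. For testbed-node-0 the logged values map one-to-one to:

    # Health check as logged for the horizon container on testbed-node-0 (values are seconds).
    horizon:
      healthcheck:
        interval: '30'      # time between probes
        retries: '3'        # consecutive failures before the container is flagged unhealthy
        start_period: '5'   # grace period after container start
        test: ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80']
        timeout: '30'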
2025-06-01 23:57:33.620299 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2025-06-01 23:57:33.620315 | orchestrator | Sunday 01 June 2025 23:56:07 +0000 (0:00:00.780) 0:00:26.273 ***********
2025-06-01 23:57:33.620331 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:57:33.620346 | orchestrator |
2025-06-01 23:57:33.620369 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2025-06-01 23:57:33.620386 | orchestrator | Sunday 01 June 2025 23:56:10 +0000 (0:00:02.114) 0:00:28.387 ***********
2025-06-01 23:57:33.620403 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:57:33.620421 | orchestrator |
2025-06-01 23:57:33.620432 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2025-06-01 23:57:33.620441 | orchestrator | Sunday 01 June 2025 23:56:12 +0000 (0:00:02.152) 0:00:30.540 ***********
2025-06-01 23:57:33.620451 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:57:33.620461 | orchestrator |
2025-06-01 23:57:33.620470 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-06-01 23:57:33.620480 | orchestrator | Sunday 01 June 2025 23:56:26 +0000 (0:00:14.678) 0:00:45.219 ***********
2025-06-01 23:57:33.620490 | orchestrator |
2025-06-01 23:57:33.620499 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-06-01 23:57:33.620509 | orchestrator | Sunday 01 June 2025 23:56:26 +0000 (0:00:00.063) 0:00:45.282 ***********
2025-06-01 23:57:33.620518 | orchestrator |
2025-06-01 23:57:33.620528 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-06-01 23:57:33.620538 | orchestrator | Sunday 01 June 2025 23:56:27 +0000 (0:00:00.062) 0:00:45.345 ***********
2025-06-01 23:57:33.620547 | orchestrator |
2025-06-01 23:57:33.620557 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2025-06-01 23:57:33.620566 | orchestrator | Sunday 01 June 2025 23:56:27 +0000 (0:00:00.065) 0:00:45.411 ***********
2025-06-01 23:57:33.620576 | orchestrator | changed: [testbed-node-0]
2025-06-01 23:57:33.620586 | orchestrator | changed: [testbed-node-2]
2025-06-01 23:57:33.620595 | orchestrator | changed: [testbed-node-1]
2025-06-01 23:57:33.620605 | orchestrator |
2025-06-01 23:57:33.620628 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 23:57:33.620639 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2025-06-01 23:57:33.620649 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-06-01 23:57:33.620659 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-06-01 23:57:33.620678 | orchestrator |
2025-06-01 23:57:33.620688 | orchestrator |
2025-06-01 23:57:33.620698 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 23:57:33.620707 | orchestrator | Sunday 01 June 2025 23:57:30 +0000 (0:01:03.619) 0:01:49.031 ***********
2025-06-01 23:57:33.620717 | orchestrator | ===============================================================================
2025-06-01 23:57:33.620726 | orchestrator | horizon : Restart horizon container ------------------------------------ 63.62s
2025-06-01 23:57:33.620740 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 14.68s
2025-06-01 23:57:33.620750 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.17s
2025-06-01 23:57:33.620760 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.15s
2025-06-01 23:57:33.620770 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.11s
2025-06-01 23:57:33.620779 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.76s
2025-06-01 23:57:33.620789 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.59s
2025-06-01 23:57:33.620799 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.56s
2025-06-01 23:57:33.620810 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.50s
2025-06-01 23:57:33.620820 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.21s
2025-06-01 23:57:33.620858 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.14s
2025-06-01 23:57:33.620869 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.07s
2025-06-01 23:57:33.620880 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.80s
2025-06-01 23:57:33.620890 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.78s
2025-06-01 23:57:33.620901 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.75s
2025-06-01 23:57:33.620912 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.62s
2025-06-01 23:57:33.620923 | orchestrator | horizon : Update policy file name --------------------------------------- 0.55s
2025-06-01 23:57:33.620933 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.52s
2025-06-01 23:57:33.620944 | orchestrator | horizon : Update policy file name --------------------------------------- 0.50s
2025-06-01 23:57:33.620955 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.48s
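The "Creating Horizon database" step above ran once, on testbed-node-0 only, which is the usual pattern for one-shot bootstrap work in a multi-node play. A minimal sketch of what such a task amounts to, using community.mysql for illustration rather than kolla's own kolla_toolbox module, with placeholder credential variables that do not come from this log:

    # Sketch only: kolla-ansible's actual implementation differs.
    - name: Creating Horizon database
      community.mysql.mysql_db:
        login_host: "{{ database_address }}"   # placeholder, not from this log
        login_user: "{{ database_user }}"      # placeholder
        login_password: "{{ database_password }}"  # placeholder
        name: horizon
        state: present
      run_once: true   # matches the single changed: [testbed-node-0] result above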
2025-06-01 23:57:33.620966 | orchestrator | 2025-06-01 23:57:33 | INFO  | Task 61c7a2bd-da53-4724-874e-5791106711d6 is in state STARTED
2025-06-01 23:57:33.620977 | orchestrator | 2025-06-01 23:57:33 | INFO  | Task 14223f4b-1105-4bc8-b61f-8af03e11e27b is in state STARTED
2025-06-01 23:57:33.620988 | orchestrator | 2025-06-01 23:57:33 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:57:36.663679 | orchestrator | 2025-06-01 23:57:36 | INFO  | Task 61c7a2bd-da53-4724-874e-5791106711d6 is in state STARTED
2025-06-01 23:57:36.665795 | orchestrator | 2025-06-01 23:57:36 | INFO  | Task 14223f4b-1105-4bc8-b61f-8af03e11e27b is in state STARTED
2025-06-01 23:57:36.665858 | orchestrator | 2025-06-01 23:57:36 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:57:39.712169 | orchestrator | 2025-06-01 23:57:39 | INFO  | Task 61c7a2bd-da53-4724-874e-5791106711d6 is in state STARTED
2025-06-01 23:57:39.713692 | orchestrator | 2025-06-01 23:57:39 | INFO  | Task 14223f4b-1105-4bc8-b61f-8af03e11e27b is in state STARTED
2025-06-01 23:57:39.713956 | orchestrator | 2025-06-01 23:57:39 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:57:42.761657 | orchestrator | 2025-06-01 23:57:42 | INFO  | Task 61c7a2bd-da53-4724-874e-5791106711d6 is in state STARTED
2025-06-01 23:57:42.763468 | orchestrator | 2025-06-01 23:57:42 | INFO  | Task 14223f4b-1105-4bc8-b61f-8af03e11e27b is in state STARTED
2025-06-01 23:57:42.763532 | orchestrator | 2025-06-01 23:57:42 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:57:45.808060 | orchestrator | 2025-06-01 23:57:45 | INFO  | Task 61c7a2bd-da53-4724-874e-5791106711d6 is in state STARTED
2025-06-01 23:57:45.810174 | orchestrator | 2025-06-01 23:57:45 | INFO  | Task 14223f4b-1105-4bc8-b61f-8af03e11e27b is in state STARTED
2025-06-01 23:57:45.810312 | orchestrator | 2025-06-01 23:57:45 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:57:48.854990 | orchestrator | 2025-06-01 23:57:48 | INFO  | Task 61c7a2bd-da53-4724-874e-5791106711d6 is in state STARTED
2025-06-01 23:57:48.855988 | orchestrator | 2025-06-01 23:57:48 | INFO  | Task 14223f4b-1105-4bc8-b61f-8af03e11e27b is in state STARTED
2025-06-01 23:57:48.856653 | orchestrator | 2025-06-01 23:57:48 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:57:51.901819 | orchestrator | 2025-06-01 23:57:51 | INFO  | Task 61c7a2bd-da53-4724-874e-5791106711d6 is in state STARTED
2025-06-01 23:57:51.903351 | orchestrator | 2025-06-01 23:57:51 | INFO  | Task 14223f4b-1105-4bc8-b61f-8af03e11e27b is in state STARTED
2025-06-01 23:57:51.903388 | orchestrator | 2025-06-01 23:57:51 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:57:54.942667 | orchestrator | 2025-06-01 23:57:54 | INFO  | Task 61c7a2bd-da53-4724-874e-5791106711d6 is in state STARTED
2025-06-01 23:57:54.944055 | orchestrator | 2025-06-01 23:57:54 | INFO  | Task 14223f4b-1105-4bc8-b61f-8af03e11e27b is in state STARTED
2025-06-01 23:57:54.944108 | orchestrator | 2025-06-01 23:57:54 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:57:57.991291 | orchestrator | 2025-06-01 23:57:57 | INFO  | Task 61c7a2bd-da53-4724-874e-5791106711d6 is in state SUCCESS
2025-06-01 23:57:57.991399 | orchestrator | 2025-06-01 23:57:57 | INFO  | Task 14223f4b-1105-4bc8-b61f-8af03e11e27b is in state STARTED
2025-06-01 23:57:57.991415 | orchestrator | 2025-06-01 23:57:57 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:58:01.045570 | orchestrator | 2025-06-01 23:58:01 | INFO  | Task ee2f0c0b-5667-4f37-8ae1-7d63b23ff36d is in state STARTED
2025-06-01 23:58:01.046921 | orchestrator | 2025-06-01 23:58:01 | INFO  | Task de5d431d-ed3a-445c-8431-350e1af1da4c is in state STARTED
2025-06-01 23:58:01.050770 | orchestrator | 2025-06-01 23:58:01 | INFO  | Task b60f82a8-ebb2-4409-bf44-4901ccd6b3a0 is in state STARTED
2025-06-01 23:58:01.051625 | orchestrator | 2025-06-01 23:58:01 | INFO  | Task 14223f4b-1105-4bc8-b61f-8af03e11e27b is in state STARTED
2025-06-01 23:58:01.051665 | orchestrator | 2025-06-01 23:58:01 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:58:04.100087 | orchestrator | 2025-06-01 23:58:04 | INFO  | Task ee2f0c0b-5667-4f37-8ae1-7d63b23ff36d is in state SUCCESS
2025-06-01 23:58:04.101036 | orchestrator | 2025-06-01 23:58:04 | INFO  | Task de5d431d-ed3a-445c-8431-350e1af1da4c is in state STARTED
2025-06-01 23:58:04.101870 | orchestrator | 2025-06-01 23:58:04 | INFO  | Task d931323b-0733-4128-b8cd-9a801ce2734e is in state STARTED
2025-06-01 23:58:04.103901 | orchestrator | 2025-06-01 23:58:04 | INFO  | Task b60f82a8-ebb2-4409-bf44-4901ccd6b3a0 is in state STARTED
2025-06-01 23:58:04.107284 | orchestrator | 2025-06-01 23:58:04 | INFO  | Task 57dd1b5a-6653-4e97-9804-24f71a44d67a is in state STARTED
2025-06-01 23:58:04.107789 | orchestrator | 2025-06-01 23:58:04 | INFO  | Task 14223f4b-1105-4bc8-b61f-8af03e11e27b is in state STARTED
2025-06-01 23:58:04.108735 | orchestrator | 2025-06-01 23:58:04 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:58:07.153081 | orchestrator | 2025-06-01 23:58:07 | INFO  | Task de5d431d-ed3a-445c-8431-350e1af1da4c is in state STARTED
2025-06-01 23:58:07.153131 | orchestrator | 2025-06-01 23:58:07 | INFO  | Task d931323b-0733-4128-b8cd-9a801ce2734e is in state STARTED
2025-06-01 23:58:07.153135 | orchestrator | 2025-06-01 23:58:07 | INFO  | Task b60f82a8-ebb2-4409-bf44-4901ccd6b3a0 is in state STARTED
2025-06-01 23:58:07.153310 | orchestrator | 2025-06-01 23:58:07 | INFO  | Task 57dd1b5a-6653-4e97-9804-24f71a44d67a is in state STARTED
2025-06-01 23:58:07.154532 | orchestrator | 2025-06-01 23:58:07 | INFO  | Task 14223f4b-1105-4bc8-b61f-8af03e11e27b is in state STARTED
2025-06-01 23:58:07.154578 | orchestrator | 2025-06-01 23:58:07 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:58:10.182378 | orchestrator | 2025-06-01 23:58:10 | INFO  | Task de5d431d-ed3a-445c-8431-350e1af1da4c is in state STARTED
2025-06-01 23:58:10.182757 | orchestrator | 2025-06-01 23:58:10 | INFO  | Task d931323b-0733-4128-b8cd-9a801ce2734e is in state STARTED
2025-06-01 23:58:10.184746 | orchestrator | 2025-06-01 23:58:10 | INFO  | Task b60f82a8-ebb2-4409-bf44-4901ccd6b3a0 is in state STARTED
2025-06-01 23:58:10.185726 | orchestrator | 2025-06-01 23:58:10 | INFO  | Task 57dd1b5a-6653-4e97-9804-24f71a44d67a is in state STARTED
2025-06-01 23:58:10.187791 | orchestrator | 2025-06-01 23:58:10 | INFO  | Task 14223f4b-1105-4bc8-b61f-8af03e11e27b is in state STARTED
2025-06-01 23:58:10.187899 | orchestrator | 2025-06-01 23:58:10 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:58:13.235323 | orchestrator | 2025-06-01 23:58:13 | INFO  | Task de5d431d-ed3a-445c-8431-350e1af1da4c is in state STARTED
2025-06-01 23:58:13.236995 | orchestrator | 2025-06-01 23:58:13 | INFO  | Task d931323b-0733-4128-b8cd-9a801ce2734e is in state STARTED
2025-06-01 23:58:13.237032 | orchestrator | 2025-06-01 23:58:13 | INFO  | Task b60f82a8-ebb2-4409-bf44-4901ccd6b3a0 is in state STARTED
2025-06-01 23:58:13.238125 | orchestrator | 2025-06-01 23:58:13 | INFO  | Task 57dd1b5a-6653-4e97-9804-24f71a44d67a is in state STARTED
2025-06-01 23:58:13.239110 | orchestrator | 2025-06-01 23:58:13 | INFO  | Task 14223f4b-1105-4bc8-b61f-8af03e11e27b is in state STARTED
2025-06-01 23:58:13.239174 | orchestrator | 2025-06-01 23:58:13 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:58:16.275739 | orchestrator | 2025-06-01 23:58:16 | INFO  | Task de5d431d-ed3a-445c-8431-350e1af1da4c is in state STARTED
2025-06-01 23:58:16.279036 | orchestrator | 2025-06-01 23:58:16 | INFO  | Task d931323b-0733-4128-b8cd-9a801ce2734e is in state STARTED
2025-06-01 23:58:16.279138 | orchestrator | 2025-06-01 23:58:16 | INFO  | Task b60f82a8-ebb2-4409-bf44-4901ccd6b3a0 is in state STARTED
2025-06-01 23:58:16.279563 | orchestrator | 2025-06-01 23:58:16 | INFO  | Task 57dd1b5a-6653-4e97-9804-24f71a44d67a is in state STARTED
2025-06-01 23:58:16.282755 | orchestrator | 2025-06-01 23:58:16 | INFO  | Task 14223f4b-1105-4bc8-b61f-8af03e11e27b is in state STARTED
2025-06-01 23:58:16.282843 | orchestrator | 2025-06-01 23:58:16 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:58:19.328173 | orchestrator | 2025-06-01 23:58:19 | INFO  | Task de5d431d-ed3a-445c-8431-350e1af1da4c is in state STARTED
2025-06-01 23:58:19.329708 | orchestrator | 2025-06-01 23:58:19 | INFO  | Task d931323b-0733-4128-b8cd-9a801ce2734e is in state STARTED
2025-06-01 23:58:19.331403 | orchestrator | 2025-06-01 23:58:19 | INFO  | Task b60f82a8-ebb2-4409-bf44-4901ccd6b3a0 is in state STARTED
2025-06-01 23:58:19.333209 | orchestrator | 2025-06-01 23:58:19 | INFO  | Task 57dd1b5a-6653-4e97-9804-24f71a44d67a is in state STARTED
2025-06-01 23:58:19.334203 | orchestrator | 2025-06-01 23:58:19 | INFO  | Task 14223f4b-1105-4bc8-b61f-8af03e11e27b is in state STARTED
2025-06-01 23:58:19.334299 | orchestrator | 2025-06-01 23:58:19 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:58:22.382756 | orchestrator | 2025-06-01 23:58:22 | INFO  | Task de5d431d-ed3a-445c-8431-350e1af1da4c is in state STARTED
2025-06-01 23:58:22.387331 | orchestrator | 2025-06-01 23:58:22 | INFO  | Task d931323b-0733-4128-b8cd-9a801ce2734e is in state STARTED
2025-06-01 23:58:22.391977 | orchestrator | 2025-06-01 23:58:22 | INFO  | Task b60f82a8-ebb2-4409-bf44-4901ccd6b3a0 is in state STARTED
2025-06-01 23:58:22.393323 | orchestrator | 2025-06-01 23:58:22 | INFO  | Task 57dd1b5a-6653-4e97-9804-24f71a44d67a is in state STARTED
2025-06-01 23:58:22.394980 | orchestrator | 2025-06-01 23:58:22 | INFO  | Task 14223f4b-1105-4bc8-b61f-8af03e11e27b is in state STARTED
2025-06-01 23:58:22.395016 | orchestrator | 2025-06-01 23:58:22 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:58:25.434986 | orchestrator | 2025-06-01 23:58:25 | INFO  | Task de5d431d-ed3a-445c-8431-350e1af1da4c is in state STARTED
2025-06-01 23:58:25.435349 | orchestrator | 2025-06-01 23:58:25 | INFO  | Task d931323b-0733-4128-b8cd-9a801ce2734e is in state STARTED
2025-06-01 23:58:25.438350 | orchestrator | 2025-06-01 23:58:25 | INFO  | Task b60f82a8-ebb2-4409-bf44-4901ccd6b3a0 is in state STARTED
2025-06-01 23:58:25.443101 | orchestrator | 2025-06-01 23:58:25 | INFO  | Task 57dd1b5a-6653-4e97-9804-24f71a44d67a is in state STARTED
2025-06-01 23:58:25.443169 | orchestrator | 2025-06-01 23:58:25 | INFO  | Task 14223f4b-1105-4bc8-b61f-8af03e11e27b is in state STARTED
2025-06-01 23:58:25.443193 | orchestrator | 2025-06-01 23:58:25 | INFO  | Wait 1 second(s) until the next check
2025-06-01 23:58:28.496550 | orchestrator | 2025-06-01 23:58:28 | INFO  | Task de5d431d-ed3a-445c-8431-350e1af1da4c is in state STARTED
2025-06-01 23:58:28.496666 | orchestrator | 2025-06-01 23:58:28 | INFO  | Task d931323b-0733-4128-b8cd-9a801ce2734e is in state STARTED
2025-06-01 23:58:28.499088 | orchestrator | 2025-06-01 23:58:28 | INFO  | Task b60f82a8-ebb2-4409-bf44-4901ccd6b3a0 is in state STARTED
2025-06-01 23:58:28.502159 | orchestrator | 2025-06-01 23:58:28 | INFO  | Task 57dd1b5a-6653-4e97-9804-24f71a44d67a is in state STARTED
2025-06-01 23:58:28.512311 | orchestrator |
2025-06-01 23:58:28.512408 | orchestrator |
2025-06-01 23:58:28.512434 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2025-06-01 23:58:28.512453 | orchestrator |
2025-06-01 23:58:28.512470 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2025-06-01 23:58:28.512488 | orchestrator | Sunday 01 June 2025 23:57:06 +0000 (0:00:00.235) 0:00:00.235 ***********
2025-06-01 23:58:28.512505 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2025-06-01 23:58:28.512525 | orchestrator |
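The cephclient role first decides how the client should be delivered; only container.yml was included here, while the package and rook variants are skipped further down. A minimal sketch of such a dispatch, where the variable name cephclient_install_type is an assumption for illustration (the log only shows which branch ran):

    # Sketch of the include dispatch; variable name is assumed, not from this log.
    - name: Include container tasks
      ansible.builtin.include_tasks: container.yml
      when: cephclient_install_type == 'container'

    - name: Include package tasks
      ansible.builtin.include_tasks: package.yml
      when: cephclient_install_type == 'package'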
2025-06-01 23:58:28.512543 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2025-06-01 23:58:28.512561 | orchestrator | Sunday 01 June 2025 23:57:06 +0000 (0:00:00.215) 0:00:00.451 ***********
2025-06-01 23:58:28.512580 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2025-06-01 23:58:28.512598 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2025-06-01 23:58:28.512779 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2025-06-01 23:58:28.512838 | orchestrator |
2025-06-01 23:58:28.512868 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2025-06-01 23:58:28.513563 | orchestrator | Sunday 01 June 2025 23:57:07 +0000 (0:00:01.222) 0:00:01.674 ***********
2025-06-01 23:58:28.513606 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2025-06-01 23:58:28.513639 | orchestrator |
2025-06-01 23:58:28.513651 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2025-06-01 23:58:28.513662 | orchestrator | Sunday 01 June 2025 23:57:08 +0000 (0:00:01.113) 0:00:02.787 ***********
2025-06-01 23:58:28.513673 | orchestrator | changed: [testbed-manager]
2025-06-01 23:58:28.513684 | orchestrator |
2025-06-01 23:58:28.513694 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2025-06-01 23:58:28.513705 | orchestrator | Sunday 01 June 2025 23:57:09 +0000 (0:00:00.960) 0:00:03.747 ***********
2025-06-01 23:58:28.513716 | orchestrator | changed: [testbed-manager]
2025-06-01 23:58:28.513726 | orchestrator |
2025-06-01 23:58:28.513737 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2025-06-01 23:58:28.513748 | orchestrator | Sunday 01 June 2025 23:57:10 +0000 (0:00:00.889) 0:00:04.636 ***********
2025-06-01 23:58:28.513758 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2025-06-01 23:58:28.513769 | orchestrator | ok: [testbed-manager]
2025-06-01 23:58:28.513780 | orchestrator |
2025-06-01 23:58:28.513790 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2025-06-01 23:58:28.513801 | orchestrator | Sunday 01 June 2025 23:57:47 +0000 (0:00:36.937) 0:00:41.574 ***********
2025-06-01 23:58:28.513834 | orchestrator | changed: [testbed-manager] => (item=ceph)
2025-06-01 23:58:28.513845 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2025-06-01 23:58:28.513857 | orchestrator | changed: [testbed-manager] => (item=rados)
2025-06-01 23:58:28.513868 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2025-06-01 23:58:28.513879 | orchestrator | changed: [testbed-manager] => (item=rbd)
2025-06-01 23:58:28.513890 | orchestrator |
2025-06-01 23:58:28.513901 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2025-06-01 23:58:28.513912 | orchestrator | Sunday 01 June 2025 23:57:51 +0000 (0:00:04.126) 0:00:45.700 ***********
2025-06-01 23:58:28.513922 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2025-06-01 23:58:28.513933 | orchestrator |
2025-06-01 23:58:28.513943 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2025-06-01 23:58:28.513954 | orchestrator | Sunday 01 June 2025 23:57:51 +0000 (0:00:00.448) 0:00:46.148 ***********
2025-06-01 23:58:28.513964 | orchestrator | skipping: [testbed-manager]
2025-06-01 23:58:28.513975 | orchestrator |
2025-06-01 23:58:28.513986 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2025-06-01 23:58:28.513996 | orchestrator | Sunday 01 June 2025 23:57:52 +0000 (0:00:00.132) 0:00:46.281 ***********
2025-06-01 23:58:28.514007 | orchestrator | skipping: [testbed-manager]
2025-06-01 23:58:28.514080 | orchestrator |
2025-06-01 23:58:28.514094 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2025-06-01 23:58:28.514105 | orchestrator | Sunday 01 June 2025 23:57:52 +0000 (0:00:00.289) 0:00:46.570 ***********
2025-06-01 23:58:28.514116 | orchestrator | changed: [testbed-manager]
2025-06-01 23:58:28.514126 | orchestrator |
2025-06-01 23:58:28.514137 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2025-06-01 23:58:28.514148 | orchestrator | Sunday 01 June 2025 23:57:53 +0000 (0:00:01.408) 0:00:47.978 ***********
2025-06-01 23:58:28.514173 | orchestrator | changed: [testbed-manager]
2025-06-01 23:58:28.514194 | orchestrator |
2025-06-01 23:58:28.514207 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for a healthy service] *******
2025-06-01 23:58:28.514219 | orchestrator | Sunday 01 June 2025 23:57:54 +0000 (0:00:00.881) 0:00:48.859 ***********
2025-06-01 23:58:28.514230 | orchestrator | changed: [testbed-manager]
2025-06-01 23:58:28.514243 | orchestrator |
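At this point the role has copied ceph.conf, a keyring and a docker-compose.yml to /opt/cephclient and started the service (the single FAILED - RETRYING above is the role waiting for the container to come up; it retries up to 10 times). The wrapper scripts installed above (ceph, ceph-authtool, rados, radosgw-admin, rbd) simply run the matching binary inside that container. A hypothetical docker-compose.yml of the shape the role deploys; the image reference and container-side mount targets are assumptions, only the host paths appear in this log:

    # Sketch only: rendered from the collection's template in the real deployment.
    services:
      cephclient:
        image: "{{ cephclient_image }}"   # assumed variable, not shown in this log
        restart: unless-stopped
        volumes:
          - /opt/cephclient/configuration:/etc/ceph:ro   # ceph.conf + keyring (host paths from the log)
          - /opt/cephclient/data:/data                   # container-side path assumed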
2025-06-01 23:58:28.514255 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2025-06-01 23:58:28.514267 | orchestrator | Sunday 01 June 2025 23:57:55 +0000 (0:00:00.580) 0:00:49.440 ***********
2025-06-01 23:58:28.514280 | orchestrator | ok: [testbed-manager] => (item=ceph)
2025-06-01 23:58:28.514291 | orchestrator | ok: [testbed-manager] => (item=rados)
2025-06-01 23:58:28.514312 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2025-06-01 23:58:28.514323 | orchestrator | ok: [testbed-manager] => (item=rbd)
2025-06-01 23:58:28.514334 | orchestrator |
2025-06-01 23:58:28.514345 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 23:58:28.514356 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-01 23:58:28.514368 | orchestrator |
2025-06-01 23:58:28.514378 | orchestrator |
2025-06-01 23:58:28.514444 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 23:58:28.514458 | orchestrator | Sunday 01 June 2025 23:57:56 +0000 (0:00:01.500) 0:00:50.941 ***********
2025-06-01 23:58:28.514468 | orchestrator | ===============================================================================
2025-06-01 23:58:28.514479 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 36.94s
2025-06-01 23:58:28.514490 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.13s
2025-06-01 23:58:28.514500 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.50s
2025-06-01 23:58:28.514511 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.41s
2025-06-01 23:58:28.514522 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.22s
2025-06-01 23:58:28.514532 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.11s
2025-06-01 23:58:28.514543 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.96s
2025-06-01 23:58:28.514553 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.89s
2025-06-01 23:58:28.514564 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.88s
2025-06-01 23:58:28.514575 | orchestrator | osism.services.cephclient : Wait for a healthy service ------------------ 0.58s
2025-06-01 23:58:28.514591 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.45s
2025-06-01 23:58:28.514602 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.29s
2025-06-01 23:58:28.514612 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.22s
2025-06-01 23:58:28.514623 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s
2025-06-01 23:58:28.514634 | orchestrator |
2025-06-01 23:58:28.514644 | orchestrator |
2025-06-01 23:58:28.514655 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-01 23:58:28.514666 | orchestrator |
2025-06-01 23:58:28.514676 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-01 23:58:28.514687 | orchestrator | Sunday 01 June 2025 23:58:01 +0000 (0:00:00.174) 0:00:00.174 ***********
2025-06-01 23:58:28.514697 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:58:28.514708 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:58:28.514719 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:58:28.514729 | orchestrator |
2025-06-01 23:58:28.514740 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-01 23:58:28.514751 | orchestrator | Sunday 01 June 2025 23:58:01 +0000 (0:00:00.309) 0:00:00.484 ***********
2025-06-01 23:58:28.514761 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-06-01 23:58:28.514772 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-06-01 23:58:28.514783 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-06-01 23:58:28.514794 | orchestrator |
2025-06-01 23:58:28.514804 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2025-06-01 23:58:28.514844 | orchestrator |
2025-06-01 23:58:28.514856 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2025-06-01 23:58:28.514866 | orchestrator | Sunday 01 June 2025 23:58:02 +0000 (0:00:00.688) 0:00:01.173 ***********
2025-06-01 23:58:28.514877 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:58:28.514888 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:58:28.514899 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:58:28.514917 | orchestrator |
2025-06-01 23:58:28.514928 | orchestrator | PLAY RECAP *********************************************************************
2025-06-01 23:58:28.514939 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 23:58:28.514951 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 23:58:28.514962 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-01 23:58:28.514973 | orchestrator |
2025-06-01 23:58:28.514984 | orchestrator |
2025-06-01 23:58:28.514994 | orchestrator | TASKS RECAP ********************************************************************
2025-06-01 23:58:28.515005 | orchestrator | Sunday 01 June 2025 23:58:02 +0000 (0:00:00.713) 0:00:01.886 ***********
2025-06-01 23:58:28.515016 | orchestrator | ===============================================================================
2025-06-01 23:58:28.515027 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.71s
2025-06-01 23:58:28.515037 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.69s
2025-06-01 23:58:28.515048 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s
2025-06-01 23:58:28.515059 | orchestrator |
2025-06-01 23:58:28.515069 | orchestrator |
2025-06-01 23:58:28.515080 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-01 23:58:28.515091 | orchestrator |
2025-06-01 23:58:28.515101 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-01 23:58:28.515112 | orchestrator | Sunday 01 June 2025 23:55:42 +0000 (0:00:00.268) 0:00:00.268 ***********
2025-06-01 23:58:28.515123 | orchestrator | ok: [testbed-node-0]
2025-06-01 23:58:28.515133 | orchestrator | ok: [testbed-node-1]
2025-06-01 23:58:28.515144 | orchestrator | ok: [testbed-node-2]
2025-06-01 23:58:28.515155 | orchestrator |
2025-06-01 23:58:28.515166 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-01 23:58:28.515176 | orchestrator | Sunday 01 June 2025 23:55:42 +0000 (0:00:00.270) 0:00:00.539 ***********
2025-06-01 23:58:28.515187 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-06-01 23:58:28.515198 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-06-01 23:58:28.515209 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
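The two "Group hosts based on configuration" plays are kolla-ansible's standard pre-flight: they sort hosts into dynamic groups such as enable_keystone_True so that later plays can target only the hosts where a service is enabled. The pattern boils down to a group_by call; the sketch below is a simplification (kolla's actual task iterates over all enabled services), but the boolean variable and resulting group name match what the log shows:

    # Minimal sketch of the dynamic-grouping pattern.
    - name: Group hosts based on enabled services
      ansible.builtin.group_by:
        key: "enable_keystone_{{ enable_keystone | bool }}"

A later play can then use hosts: enable_keystone_True and run only where Keystone is enabled.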
2025-06-01 23:58:28.515220 | orchestrator |
2025-06-01 23:58:28.515231 | orchestrator | PLAY [Apply role keystone] *****************************************************
2025-06-01 23:58:28.515241 | orchestrator |
2025-06-01 23:58:28.515291 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-06-01 23:58:28.515305 | orchestrator | Sunday 01 June 2025 23:55:42 +0000 (0:00:00.418) 0:00:00.957 ***********
2025-06-01 23:58:28.515324 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-01 23:58:28.515343 | orchestrator |
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 23:58:28.515589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 23:58:28.515600 | orchestrator | 2025-06-01 23:58:28.515612 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-06-01 23:58:28.515623 | orchestrator | Sunday 01 June 2025 23:55:45 +0000 (0:00:01.765) 0:00:03.261 *********** 2025-06-01 23:58:28.515634 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-06-01 23:58:28.515645 | orchestrator | 2025-06-01 23:58:28.515656 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-06-01 23:58:28.515667 | orchestrator | Sunday 01 June 2025 23:55:45 +0000 (0:00:00.854) 0:00:04.116 *********** 2025-06-01 23:58:28.515678 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:58:28.515689 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:58:28.515699 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:58:28.515710 | orchestrator | 2025-06-01 23:58:28.515727 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-06-01 23:58:28.515747 | orchestrator | Sunday 01 June 2025 23:55:46 +0000 (0:00:00.466) 0:00:04.583 *********** 2025-06-01 23:58:28.515765 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-01 23:58:28.515783 | orchestrator | 2025-06-01 23:58:28.515801 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-01 23:58:28.515842 | orchestrator | Sunday 01 June 2025 23:55:46 +0000 (0:00:00.654) 0:00:05.238 *********** 2025-06-01 23:58:28.515863 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:58:28.515884 | orchestrator | 2025-06-01 23:58:28.515954 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-06-01 23:58:28.515968 | orchestrator | Sunday 01 June 2025 23:55:47 +0000 (0:00:00.532) 0:00:05.770 *********** 2025-06-01 23:58:28.515988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 23:58:28.516012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 23:58:28.516025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 23:58:28.516038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-01 23:58:28.516056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-01 23:58:28.516075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-01 23:58:28.516118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 23:58:28.516141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 23:58:28.516161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 23:58:28.516181 | orchestrator | 2025-06-01 23:58:28.516202 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-06-01 23:58:28.516224 | orchestrator | Sunday 01 June 2025 23:55:50 +0000 (0:00:03.374) 0:00:09.144 *********** 2025-06-01 23:58:28.516245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-01 23:58:28.516274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-01 23:58:28.516317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-01 23:58:28.516338 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:58:28.516357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-01 23:58:28.516379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-01 23:58:28.516399 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-01 23:58:28.516417 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:58:28.516437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-01 23:58:28.516459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-01 23:58:28.516476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-01 23:58:28.516487 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:58:28.516498 | orchestrator | 2025-06-01 23:58:28.516509 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-06-01 23:58:28.516520 | orchestrator | Sunday 01 June 2025 23:55:51 +0000 (0:00:00.552) 0:00:09.697 *********** 2025-06-01 23:58:28.516532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-01 23:58:28.516544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-01 23:58:28.516555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-01 23:58:28.516573 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:58:28 | INFO  | Task 14223f4b-1105-4bc8-b61f-8af03e11e27b is in state SUCCESS 2025-06-01 23:58:28.516591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-01 23:58:28.516621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name':
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-01 23:58:28.516632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-01 23:58:28.516643 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:58:28.516655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-01 23:58:28.516667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-01 23:58:28.516693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': 
'30'}}})  2025-06-01 23:58:28.516705 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:58:28.516716 | orchestrator | 2025-06-01 23:58:28.516727 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-06-01 23:58:28.516737 | orchestrator | Sunday 01 June 2025 23:55:52 +0000 (0:00:00.753) 0:00:10.450 *********** 2025-06-01 23:58:28.516753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 23:58:28.516766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 23:58:28.516778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 
23:58:28.516804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-01 23:58:28.516859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-01 23:58:28.516887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-01 23:58:28.516907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 23:58:28.516927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 23:58:28.516947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 23:58:28.516978 | orchestrator | 2025-06-01 23:58:28.516991 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-06-01 23:58:28.517003 | orchestrator | Sunday 01 June 2025 23:55:55 +0000 (0:00:03.439) 0:00:13.890 *********** 2025-06-01 23:58:28.517025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 23:58:28.517038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-01 23:58:28.517055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 23:58:28.517067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-01 23:58:28.517079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 23:58:28.517104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-01 23:58:28.517116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 23:58:28.517131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 23:58:28.517143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 23:58:28.517154 | orchestrator | 2025-06-01 23:58:28.517165 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-06-01 23:58:28.517176 | orchestrator | Sunday 01 June 2025 23:56:00 +0000 (0:00:05.018) 0:00:18.908 *********** 2025-06-01 23:58:28.517187 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:58:28.517198 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:58:28.517209 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:58:28.517219 | orchestrator | 2025-06-01 23:58:28.517230 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-06-01 23:58:28.517241 | orchestrator | Sunday 01 June 2025 23:56:02 +0000 (0:00:01.386) 0:00:20.295 *********** 2025-06-01 23:58:28.517258 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:58:28.517269 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:58:28.517279 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:58:28.517290 | orchestrator | 2025-06-01 23:58:28.517301 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-06-01 23:58:28.517312 | orchestrator | Sunday 01 June 2025 23:56:02 +0000 (0:00:00.519) 0:00:20.815 *********** 2025-06-01 23:58:28.517326 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:58:28.517345 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:58:28.517362 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:58:28.517379 | orchestrator | 2025-06-01 23:58:28.517397 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-06-01 23:58:28.517416 | orchestrator | Sunday 01 June 2025 23:56:03 +0000 (0:00:00.500) 0:00:21.315 *********** 2025-06-01 23:58:28.517435 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:58:28.517455 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:58:28.517473 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:58:28.517490 | orchestrator | 2025-06-01 23:58:28.517501 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-06-01 23:58:28.517511 | orchestrator | Sunday 01 June 2025 23:56:03 +0000 (0:00:00.281) 0:00:21.597 *********** 2025-06-01 23:58:28.517531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 23:58:28.517544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-01 23:58:28.517562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 23:58:28.517586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-01 23:58:28.517598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 23:58:28.517610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-01 23:58:28.517628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 23:58:28.517645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 23:58:28.517656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 23:58:28.517674 | orchestrator | 2025-06-01 23:58:28.517685 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-01 23:58:28.517702 | orchestrator | Sunday 01 June 2025 23:56:05 +0000 (0:00:02.219) 0:00:23.817 *********** 2025-06-01 23:58:28.517721 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:58:28.517739 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:58:28.517756 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:58:28.517774 | orchestrator | 2025-06-01 23:58:28.517793 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-06-01 23:58:28.517874 | orchestrator | Sunday 01 June 2025 23:56:05 +0000 (0:00:00.309) 0:00:24.126 *********** 2025-06-01 23:58:28.517888 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-06-01 23:58:28.517900 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-06-01 23:58:28.517911 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-06-01 23:58:28.517922 | orchestrator | 2025-06-01 23:58:28.517932 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-06-01 23:58:28.517943 | orchestrator | Sunday 01 June 2025 23:56:07 +0000 (0:00:02.059) 0:00:26.186 *********** 2025-06-01 23:58:28.517953 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-01 23:58:28.517969 | orchestrator | 2025-06-01 23:58:28.517987 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-06-01 23:58:28.518006 | orchestrator | Sunday 01 June 2025 23:56:08 +0000 (0:00:00.895) 0:00:27.081 *********** 2025-06-01 23:58:28.518078 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:58:28.518091 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:58:28.518101 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:58:28.518112 | orchestrator | 2025-06-01 23:58:28.518123 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-06-01 23:58:28.518133 | orchestrator | Sunday 01 June 2025 23:56:09 +0000 (0:00:00.526) 0:00:27.608 *********** 2025-06-01 23:58:28.518144 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-01 23:58:28.518154 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-01 23:58:28.518165 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-01 23:58:28.518175 | orchestrator | 2025-06-01 23:58:28.518186 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-06-01 23:58:28.518196 | orchestrator | Sunday 01 June 2025 23:56:10 +0000 (0:00:01.014) 0:00:28.623 *********** 2025-06-01 23:58:28.518207 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:58:28.518218 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:58:28.518229 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:58:28.518239 | orchestrator | 2025-06-01 23:58:28.518250 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-06-01 23:58:28.518260 | orchestrator | Sunday 01 June 2025 23:56:10 +0000 (0:00:00.306) 0:00:28.929 *********** 2025-06-01 23:58:28.518271 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-06-01 23:58:28.518287 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-06-01 23:58:28.518306 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-06-01 23:58:28.518324 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-06-01 23:58:28.518355 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-06-01 23:58:28.518376 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-06-01 23:58:28.518396 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-06-01 23:58:28.518414 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 
'fernet-node-sync.sh'}) 2025-06-01 23:58:28.518437 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-06-01 23:58:28.518447 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-06-01 23:58:28.518456 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-06-01 23:58:28.518466 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-06-01 23:58:28.518475 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-06-01 23:58:28.518485 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-06-01 23:58:28.518494 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-06-01 23:58:28.518504 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-01 23:58:28.518514 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-01 23:58:28.518523 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-01 23:58:28.518533 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-01 23:58:28.518542 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-01 23:58:28.518551 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-01 23:58:28.518561 | orchestrator | 2025-06-01 23:58:28.518570 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-06-01 23:58:28.518580 | orchestrator | Sunday 01 June 2025 23:56:19 +0000 (0:00:08.550) 0:00:37.480 *********** 2025-06-01 23:58:28.518589 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-01 23:58:28.518598 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-01 23:58:28.518608 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-01 23:58:28.518617 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-01 23:58:28.518627 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-01 23:58:28.518636 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-01 23:58:28.518646 | orchestrator | 2025-06-01 23:58:28.518655 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-06-01 23:58:28.518677 | orchestrator | Sunday 01 June 2025 23:56:21 +0000 (0:00:02.514) 0:00:39.995 *********** 2025-06-01 23:58:28.518688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 23:58:28.518781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 23:58:28.518871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-01 23:58:28.518886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-01 23:58:28.518897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-01 23:58:28.518908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-01 23:58:28.518918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 23:58:28.518942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 23:58:28.518958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-01 23:58:28.518968 | orchestrator | 2025-06-01 23:58:28.518978 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-01 23:58:28.518996 | orchestrator | Sunday 01 June 2025 23:56:23 +0000 (0:00:02.207) 0:00:42.202 *********** 2025-06-01 23:58:28.519012 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:58:28.519029 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:58:28.519046 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:58:28.519063 | orchestrator | 
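[editor's note] The keystone container definitions above carry Docker-style healthcheck settings in their 'healthcheck' dict: every 'interval' (30 s) after a 'start_period' (5 s) the container runs the 'test' command (healthcheck_curl against the node's internal API address), and 'retries' (3) consecutive failures mark the container unhealthy. A minimal Python sketch of such a probe, using only the URL and timeout shown in the log; kolla's real healthcheck_curl helper is not shown here, so this reimplementation is illustrative only:

```python
# Illustrative reimplementation of a healthcheck_curl-style probe.
# The URL and timeout come from the container dimensions logged above;
# kolla's shipped helper is a different implementation.
import sys
import urllib.request

URL = "http://192.168.16.11:5000"   # keystone port on testbed-node-1, from the log
TIMEOUT = 30                        # 'timeout': '30' in the healthcheck dict

def probe(url: str, timeout: int) -> int:
    try:
        # urlopen raises HTTPError for status >= 400, which the except
        # clause below converts into an unhealthy exit code.
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 0 if resp.status < 400 else 1
    except Exception:
        # Docker marks the container unhealthy after 'retries' (3)
        # consecutive non-zero exits, checked every 'interval' (30) s.
        return 1

if __name__ == "__main__":
    sys.exit(probe(URL, TIMEOUT))
```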
2025-06-01 23:58:28.519079 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-06-01 23:58:28.519095 | orchestrator | Sunday 01 June 2025 23:56:24 +0000 (0:00:00.284) 0:00:42.487 *********** 2025-06-01 23:58:28.519105 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:58:28.519114 | orchestrator | 2025-06-01 23:58:28.519124 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-06-01 23:58:28.519133 | orchestrator | Sunday 01 June 2025 23:56:26 +0000 (0:00:02.204) 0:00:44.692 *********** 2025-06-01 23:58:28.519143 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:58:28.519152 | orchestrator | 2025-06-01 23:58:28.519161 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-06-01 23:58:28.519171 | orchestrator | Sunday 01 June 2025 23:56:28 +0000 (0:00:02.534) 0:00:47.226 *********** 2025-06-01 23:58:28.519180 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:58:28.519190 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:58:28.519199 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:58:28.519209 | orchestrator | 2025-06-01 23:58:28.519221 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-06-01 23:58:28.519245 | orchestrator | Sunday 01 June 2025 23:56:29 +0000 (0:00:00.861) 0:00:48.088 *********** 2025-06-01 23:58:28.519265 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:58:28.519280 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:58:28.519296 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:58:28.519312 | orchestrator | 2025-06-01 23:58:28.519325 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-06-01 23:58:28.519337 | orchestrator | Sunday 01 June 2025 23:56:30 +0000 (0:00:00.309) 0:00:48.398 *********** 2025-06-01 23:58:28.519349 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:58:28.519361 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:58:28.519383 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:58:28.519396 | orchestrator | 2025-06-01 23:58:28.519408 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-06-01 23:58:28.519421 | orchestrator | Sunday 01 June 2025 23:56:30 +0000 (0:00:00.333) 0:00:48.731 *********** 2025-06-01 23:58:28.519433 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:58:28.519445 | orchestrator | 2025-06-01 23:58:28.519457 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-06-01 23:58:28.519470 | orchestrator | Sunday 01 June 2025 23:56:43 +0000 (0:00:13.036) 0:01:01.768 *********** 2025-06-01 23:58:28.519482 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:58:28.519494 | orchestrator | 2025-06-01 23:58:28.519507 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-01 23:58:28.519519 | orchestrator | Sunday 01 June 2025 23:56:52 +0000 (0:00:09.149) 0:01:10.917 *********** 2025-06-01 23:58:28.519532 | orchestrator | 2025-06-01 23:58:28.519544 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-01 23:58:28.519557 | orchestrator | Sunday 01 June 2025 23:56:52 +0000 (0:00:00.266) 0:01:11.183 *********** 2025-06-01 23:58:28.519570 | orchestrator | 2025-06-01 23:58:28.519583 | orchestrator | TASK [keystone : Flush handlers] 
*********************************************** 2025-06-01 23:58:28.519594 | orchestrator | Sunday 01 June 2025 23:56:53 +0000 (0:00:00.063) 0:01:11.247 *********** 2025-06-01 23:58:28.519606 | orchestrator | 2025-06-01 23:58:28.519618 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-06-01 23:58:28.519632 | orchestrator | Sunday 01 June 2025 23:56:53 +0000 (0:00:00.060) 0:01:11.308 *********** 2025-06-01 23:58:28.519645 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:58:28.519659 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:58:28.519673 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:58:28.519687 | orchestrator | 2025-06-01 23:58:28.519701 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-06-01 23:58:28.519714 | orchestrator | Sunday 01 June 2025 23:57:20 +0000 (0:00:27.261) 0:01:38.569 *********** 2025-06-01 23:58:28.519726 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:58:28.519738 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:58:28.519751 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:58:28.519764 | orchestrator | 2025-06-01 23:58:28.519778 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-06-01 23:58:28.519802 | orchestrator | Sunday 01 June 2025 23:57:30 +0000 (0:00:10.306) 0:01:48.876 *********** 2025-06-01 23:58:28.519838 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:58:28.519852 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:58:28.519864 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:58:28.519876 | orchestrator | 2025-06-01 23:58:28.519889 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-01 23:58:28.519903 | orchestrator | Sunday 01 June 2025 23:57:36 +0000 (0:00:06.107) 0:01:54.983 *********** 2025-06-01 23:58:28.519917 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-01 23:58:28.519932 | orchestrator | 2025-06-01 23:58:28.519944 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-06-01 23:58:28.519958 | orchestrator | Sunday 01 June 2025 23:57:37 +0000 (0:00:00.761) 0:01:55.745 *********** 2025-06-01 23:58:28.519972 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:58:28.519985 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:58:28.520000 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:58:28.520013 | orchestrator | 2025-06-01 23:58:28.520027 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-06-01 23:58:28.520041 | orchestrator | Sunday 01 June 2025 23:57:38 +0000 (0:00:00.699) 0:01:56.445 *********** 2025-06-01 23:58:28.520054 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:58:28.520069 | orchestrator | 2025-06-01 23:58:28.520083 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-06-01 23:58:28.520117 | orchestrator | Sunday 01 June 2025 23:57:40 +0000 (0:00:01.867) 0:01:58.312 *********** 2025-06-01 23:58:28.520132 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-06-01 23:58:28.520145 | orchestrator | 2025-06-01 23:58:28.520160 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-06-01 23:58:28.520173 | orchestrator | Sunday 01 June 2025 23:57:49 
+0000 (0:00:09.594) 0:02:07.907 *********** 2025-06-01 23:58:28.520187 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-06-01 23:58:28.520201 | orchestrator | 2025-06-01 23:58:28.520215 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-06-01 23:58:28.520228 | orchestrator | Sunday 01 June 2025 23:58:08 +0000 (0:00:19.153) 0:02:27.061 *********** 2025-06-01 23:58:28.520241 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-06-01 23:58:28.520256 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-06-01 23:58:28.520269 | orchestrator | 2025-06-01 23:58:28.520282 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-06-01 23:58:28.520295 | orchestrator | Sunday 01 June 2025 23:58:21 +0000 (0:00:12.280) 0:02:39.341 *********** 2025-06-01 23:58:28.520309 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:58:28.520323 | orchestrator | 2025-06-01 23:58:28.520337 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-06-01 23:58:28.520351 | orchestrator | Sunday 01 June 2025 23:58:21 +0000 (0:00:00.350) 0:02:39.691 *********** 2025-06-01 23:58:28.520365 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:58:28.520379 | orchestrator | 2025-06-01 23:58:28.520392 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-06-01 23:58:28.520405 | orchestrator | Sunday 01 June 2025 23:58:21 +0000 (0:00:00.138) 0:02:39.830 *********** 2025-06-01 23:58:28.520419 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:58:28.520432 | orchestrator | 2025-06-01 23:58:28.520446 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-06-01 23:58:28.520459 | orchestrator | Sunday 01 June 2025 23:58:21 +0000 (0:00:00.130) 0:02:39.961 *********** 2025-06-01 23:58:28.520472 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:58:28.520485 | orchestrator | 2025-06-01 23:58:28.520497 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-06-01 23:58:28.520510 | orchestrator | Sunday 01 June 2025 23:58:22 +0000 (0:00:00.369) 0:02:40.330 *********** 2025-06-01 23:58:28.520523 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:58:28.520535 | orchestrator | 2025-06-01 23:58:28.520549 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-01 23:58:28.520562 | orchestrator | Sunday 01 June 2025 23:58:25 +0000 (0:00:03.109) 0:02:43.440 *********** 2025-06-01 23:58:28.520575 | orchestrator | skipping: [testbed-node-0] 2025-06-01 23:58:28.520588 | orchestrator | skipping: [testbed-node-1] 2025-06-01 23:58:28.520603 | orchestrator | skipping: [testbed-node-2] 2025-06-01 23:58:28.520616 | orchestrator | 2025-06-01 23:58:28.520629 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 23:58:28.520643 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-06-01 23:58:28.520657 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-06-01 23:58:28.520671 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 
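[editor's note] The service-ks-register tasks above registered the identity service and its internal/public endpoints through Ansible's OpenStack modules. A rough openstacksdk equivalent of those two steps, assuming admin credentials in a clouds.yaml entry named "testbed" (hypothetical); the service type, URLs, and region are taken from the log:

```python
# Rough Python equivalent of the "Creating services" / "Creating endpoints"
# steps; the cloud name "testbed" is an assumption for illustration.
import openstack

conn = openstack.connect(cloud="testbed")

# "keystone | Creating services": one service of type identity.
service = conn.identity.create_service(name="keystone", type="identity")

# "keystone | Creating endpoints": internal and public, RegionOne,
# with the URLs shown in the log.
for interface, url in [
    ("internal", "https://api-int.testbed.osism.xyz:5000"),
    ("public", "https://api.testbed.osism.xyz:5000"),
]:
    conn.identity.create_endpoint(
        service_id=service.id,
        interface=interface,
        url=url,
        region_id="RegionOne",
    )
```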
2025-06-01 23:58:28.520684 | orchestrator | 2025-06-01 23:58:28.520698 | orchestrator | 2025-06-01 23:58:28.520713 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 23:58:28.520728 | orchestrator | Sunday 01 June 2025 23:58:25 +0000 (0:00:00.630) 0:02:44.070 *********** 2025-06-01 23:58:28.520753 | orchestrator | =============================================================================== 2025-06-01 23:58:28.520767 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 27.26s 2025-06-01 23:58:28.520781 | orchestrator | service-ks-register : keystone | Creating services --------------------- 19.15s 2025-06-01 23:58:28.520794 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.04s 2025-06-01 23:58:28.520837 | orchestrator | service-ks-register : keystone | Creating endpoints -------------------- 12.28s 2025-06-01 23:58:28.520851 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.31s 2025-06-01 23:58:28.520865 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint ---- 9.59s 2025-06-01 23:58:28.520880 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.15s 2025-06-01 23:58:28.520893 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.55s 2025-06-01 23:58:28.520906 | orchestrator | keystone : Restart keystone container ----------------------------------- 6.11s 2025-06-01 23:58:28.520920 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.02s 2025-06-01 23:58:28.520934 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.44s 2025-06-01 23:58:28.520948 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.37s 2025-06-01 23:58:28.520961 | orchestrator | keystone : Creating default user role ----------------------------------- 3.11s 2025-06-01 23:58:28.520974 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.53s 2025-06-01 23:58:28.520988 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.51s 2025-06-01 23:58:28.521001 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.22s 2025-06-01 23:58:28.521023 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.21s 2025-06-01 23:58:28.521037 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.20s 2025-06-01 23:58:28.521050 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.06s 2025-06-01 23:58:28.521064 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.87s 2025-06-01 23:58:28.521077 | orchestrator | 2025-06-01 23:58:28 | INFO  | Task 0bdb5bff-f695-4264-8b60-bb1557704b5c is in state STARTED 2025-06-01 23:58:28.521091 | orchestrator | 2025-06-01 23:58:28 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:58:31.548711 | orchestrator | 2025-06-01 23:58:31 | INFO  | Task de5d431d-ed3a-445c-8431-350e1af1da4c is in state STARTED 2025-06-01 23:58:31.549313 | orchestrator | 2025-06-01 23:58:31 | INFO  | Task d931323b-0733-4128-b8cd-9a801ce2734e is in state STARTED 2025-06-01 23:58:31.549930 | orchestrator | 2025-06-01 23:58:31 | INFO  | Task 
b60f82a8-ebb2-4409-bf44-4901ccd6b3a0 is in state STARTED 2025-06-01 23:58:31.550702 | orchestrator | 2025-06-01 23:58:31 | INFO  | Task 57dd1b5a-6653-4e97-9804-24f71a44d67a is in state STARTED 2025-06-01 23:58:31.552290 | orchestrator | 2025-06-01 23:58:31 | INFO  | Task 0bdb5bff-f695-4264-8b60-bb1557704b5c is in state STARTED 2025-06-01 23:58:31.555063 | orchestrator | 2025-06-01 23:58:31 | INFO  | Wait 1 second(s) until the next check [... repetitive ~3 s polling elided: tasks de5d431d, b60f82a8, 57dd1b5a and 0bdb5bff stayed STARTED; at 23:58:43 task d931323b-0733-4128-b8cd-9a801ce2734e reached SUCCESS and task 46ae15ef-6408-4095-b0c0-e4017efa90af entered STARTED; polling continued through 23:59:11 ...] 2025-06-01 23:59:11.047772 | 
orchestrator | 2025-06-01 23:59:11 | INFO  | Task 0bdb5bff-f695-4264-8b60-bb1557704b5c is in state STARTED 2025-06-01 23:59:11.047871 | orchestrator | 2025-06-01 23:59:11 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:59:14.080544 | orchestrator | 2025-06-01 23:59:14 | INFO  | Task de5d431d-ed3a-445c-8431-350e1af1da4c is in state STARTED 2025-06-01 23:59:14.080929 | orchestrator | 2025-06-01 23:59:14 | INFO  | Task b60f82a8-ebb2-4409-bf44-4901ccd6b3a0 is in state STARTED 2025-06-01 23:59:14.081932 | orchestrator | 2025-06-01 23:59:14 | INFO  | Task 57dd1b5a-6653-4e97-9804-24f71a44d67a is in state STARTED 2025-06-01 23:59:14.082821 | orchestrator | 2025-06-01 23:59:14 | INFO  | Task 46ae15ef-6408-4095-b0c0-e4017efa90af is in state STARTED 2025-06-01 23:59:14.083571 | orchestrator | 2025-06-01 23:59:14 | INFO  | Task 0bdb5bff-f695-4264-8b60-bb1557704b5c is in state STARTED 2025-06-01 23:59:14.083595 | orchestrator | 2025-06-01 23:59:14 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:59:17.118892 | orchestrator | 2025-06-01 23:59:17 | INFO  | Task de5d431d-ed3a-445c-8431-350e1af1da4c is in state STARTED 2025-06-01 23:59:17.119479 | orchestrator | 2025-06-01 23:59:17 | INFO  | Task b60f82a8-ebb2-4409-bf44-4901ccd6b3a0 is in state STARTED 2025-06-01 23:59:17.120691 | orchestrator | 2025-06-01 23:59:17 | INFO  | Task 57dd1b5a-6653-4e97-9804-24f71a44d67a is in state STARTED 2025-06-01 23:59:17.121546 | orchestrator | 2025-06-01 23:59:17 | INFO  | Task 46ae15ef-6408-4095-b0c0-e4017efa90af is in state STARTED 2025-06-01 23:59:17.122594 | orchestrator | 2025-06-01 23:59:17 | INFO  | Task 0bdb5bff-f695-4264-8b60-bb1557704b5c is in state STARTED 2025-06-01 23:59:17.122667 | orchestrator | 2025-06-01 23:59:17 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:59:20.160145 | orchestrator | 2025-06-01 23:59:20 | INFO  | Task de5d431d-ed3a-445c-8431-350e1af1da4c is in state STARTED 2025-06-01 23:59:20.160408 | orchestrator | 2025-06-01 23:59:20 | INFO  | Task b60f82a8-ebb2-4409-bf44-4901ccd6b3a0 is in state SUCCESS 2025-06-01 23:59:20.160433 | orchestrator | 2025-06-01 23:59:20.160446 | orchestrator | 2025-06-01 23:59:20.160459 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-01 23:59:20.160471 | orchestrator | 2025-06-01 23:59:20.160483 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-01 23:59:20.160495 | orchestrator | Sunday 01 June 2025 23:58:09 +0000 (0:00:00.372) 0:00:00.372 *********** 2025-06-01 23:59:20.160507 | orchestrator | ok: [testbed-node-0] 2025-06-01 23:59:20.160537 | orchestrator | ok: [testbed-node-1] 2025-06-01 23:59:20.160548 | orchestrator | ok: [testbed-node-2] 2025-06-01 23:59:20.160559 | orchestrator | ok: [testbed-manager] 2025-06-01 23:59:20.160570 | orchestrator | ok: [testbed-node-3] 2025-06-01 23:59:20.160581 | orchestrator | ok: [testbed-node-4] 2025-06-01 23:59:20.160593 | orchestrator | ok: [testbed-node-5] 2025-06-01 23:59:20.160604 | orchestrator | 2025-06-01 23:59:20.160615 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-01 23:59:20.160627 | orchestrator | Sunday 01 June 2025 23:58:10 +0000 (0:00:00.994) 0:00:01.367 *********** 2025-06-01 23:59:20.160638 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-06-01 23:59:20.160650 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-06-01 23:59:20.160661 | orchestrator | 
ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-06-01 23:59:20.160673 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-06-01 23:59:20.160684 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-06-01 23:59:20.160695 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-06-01 23:59:20.160706 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-06-01 23:59:20.160717 | orchestrator | 2025-06-01 23:59:20.160728 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-06-01 23:59:20.160739 | orchestrator | 2025-06-01 23:59:20.160750 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-06-01 23:59:20.160761 | orchestrator | Sunday 01 June 2025 23:58:11 +0000 (0:00:01.120) 0:00:02.488 *********** 2025-06-01 23:59:20.160773 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-01 23:59:20.160786 | orchestrator | 2025-06-01 23:59:20.160832 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-06-01 23:59:20.160843 | orchestrator | Sunday 01 June 2025 23:58:12 +0000 (0:00:01.680) 0:00:04.168 *********** 2025-06-01 23:59:20.160854 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-06-01 23:59:20.160865 | orchestrator | 2025-06-01 23:59:20.160876 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-06-01 23:59:20.160888 | orchestrator | Sunday 01 June 2025 23:58:16 +0000 (0:00:03.779) 0:00:07.947 *********** 2025-06-01 23:59:20.160906 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-06-01 23:59:20.160927 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-06-01 23:59:20.160945 | orchestrator | 2025-06-01 23:59:20.160962 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-06-01 23:59:20.160973 | orchestrator | Sunday 01 June 2025 23:58:23 +0000 (0:00:06.844) 0:00:14.792 *********** 2025-06-01 23:59:20.160985 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-01 23:59:20.161021 | orchestrator | 2025-06-01 23:59:20.161079 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-06-01 23:59:20.161093 | orchestrator | Sunday 01 June 2025 23:58:26 +0000 (0:00:02.993) 0:00:17.785 *********** 2025-06-01 23:59:20.161208 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-01 23:59:20.161226 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-06-01 23:59:20.161237 | orchestrator | 2025-06-01 23:59:20.161248 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-06-01 23:59:20.161259 | orchestrator | Sunday 01 June 2025 23:58:29 +0000 (0:00:03.475) 0:00:21.261 *********** 2025-06-01 23:59:20.161270 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-01 23:59:20.161281 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-06-01 23:59:20.161292 | orchestrator | 2025-06-01 23:59:20.161303 | orchestrator | TASK [service-ks-register : ceph-rgw | 
Granting user roles] ******************** 2025-06-01 23:59:20.161314 | orchestrator | Sunday 01 June 2025 23:58:36 +0000 (0:00:06.133) 0:00:27.395 *********** 2025-06-01 23:59:20.161325 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-06-01 23:59:20.161336 | orchestrator | 2025-06-01 23:59:20.161347 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 23:59:20.161358 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 23:59:20.161369 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 23:59:20.161381 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 23:59:20.161392 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 23:59:20.161421 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 23:59:20.161433 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 23:59:20.161444 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 23:59:20.161455 | orchestrator | 2025-06-01 23:59:20.161466 | orchestrator | 2025-06-01 23:59:20.161477 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 23:59:20.161495 | orchestrator | Sunday 01 June 2025 23:58:42 +0000 (0:00:05.901) 0:00:33.296 *********** 2025-06-01 23:59:20.161506 | orchestrator | =============================================================================== 2025-06-01 23:59:20.161517 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.84s 2025-06-01 23:59:20.161528 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.13s 2025-06-01 23:59:20.161539 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.90s 2025-06-01 23:59:20.161550 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.78s 2025-06-01 23:59:20.161561 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.48s 2025-06-01 23:59:20.161572 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.99s 2025-06-01 23:59:20.161582 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.68s 2025-06-01 23:59:20.161593 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.12s 2025-06-01 23:59:20.161604 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.99s 2025-06-01 23:59:20.161615 | orchestrator | 2025-06-01 23:59:20.161625 | orchestrator | 2025-06-01 23:59:20.161636 | orchestrator | PLAY [Bootstrap ceph dashboard] ************************************************ 2025-06-01 23:59:20.161657 | orchestrator | 2025-06-01 23:59:20.161668 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-06-01 23:59:20.161679 | orchestrator | Sunday 01 June 2025 23:58:01 +0000 (0:00:00.276) 0:00:00.276 *********** 2025-06-01 23:59:20.161690 | orchestrator | changed: [testbed-manager] 2025-06-01 23:59:20.161701 | orchestrator | 2025-06-01 
23:59:20.161711 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-06-01 23:59:20.161722 | orchestrator | Sunday 01 June 2025 23:58:02 +0000 (0:00:01.739) 0:00:02.016 *********** 2025-06-01 23:59:20.161733 | orchestrator | changed: [testbed-manager] 2025-06-01 23:59:20.161944 | orchestrator | 2025-06-01 23:59:20.161967 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-06-01 23:59:20.161979 | orchestrator | Sunday 01 June 2025 23:58:03 +0000 (0:00:01.022) 0:00:03.039 *********** 2025-06-01 23:59:20.161992 | orchestrator | changed: [testbed-manager] 2025-06-01 23:59:20.162166 | orchestrator | 2025-06-01 23:59:20.162182 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-06-01 23:59:20.162194 | orchestrator | Sunday 01 June 2025 23:58:05 +0000 (0:00:01.153) 0:00:04.193 *********** 2025-06-01 23:59:20.162206 | orchestrator | changed: [testbed-manager] 2025-06-01 23:59:20.162217 | orchestrator | 2025-06-01 23:59:20.162228 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-06-01 23:59:20.162239 | orchestrator | Sunday 01 June 2025 23:58:06 +0000 (0:00:01.098) 0:00:05.292 *********** 2025-06-01 23:59:20.162250 | orchestrator | changed: [testbed-manager] 2025-06-01 23:59:20.162261 | orchestrator | 2025-06-01 23:59:20.162271 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-06-01 23:59:20.162282 | orchestrator | Sunday 01 June 2025 23:58:07 +0000 (0:00:01.438) 0:00:06.731 *********** 2025-06-01 23:59:20.162293 | orchestrator | changed: [testbed-manager] 2025-06-01 23:59:20.162304 | orchestrator | 2025-06-01 23:59:20.162315 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-06-01 23:59:20.162326 | orchestrator | Sunday 01 June 2025 23:58:08 +0000 (0:00:01.023) 0:00:07.754 *********** 2025-06-01 23:59:20.162337 | orchestrator | changed: [testbed-manager] 2025-06-01 23:59:20.162347 | orchestrator | 2025-06-01 23:59:20.162358 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-06-01 23:59:20.162369 | orchestrator | Sunday 01 June 2025 23:58:10 +0000 (0:00:02.043) 0:00:09.797 *********** 2025-06-01 23:59:20.162380 | orchestrator | changed: [testbed-manager] 2025-06-01 23:59:20.162391 | orchestrator | 2025-06-01 23:59:20.162401 | orchestrator | TASK [Create admin user] ******************************************************* 2025-06-01 23:59:20.162412 | orchestrator | Sunday 01 June 2025 23:58:11 +0000 (0:00:01.089) 0:00:10.887 *********** 2025-06-01 23:59:20.162423 | orchestrator | changed: [testbed-manager] 2025-06-01 23:59:20.162434 | orchestrator | 2025-06-01 23:59:20.162445 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-06-01 23:59:20.162455 | orchestrator | Sunday 01 June 2025 23:58:53 +0000 (0:00:41.453) 0:00:52.340 *********** 2025-06-01 23:59:20.162466 | orchestrator | skipping: [testbed-manager] 2025-06-01 23:59:20.162477 | orchestrator | 2025-06-01 23:59:20.162488 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-01 23:59:20.162499 | orchestrator | 2025-06-01 23:59:20.162510 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-01 23:59:20.162521 | orchestrator | Sunday 01 
June 2025 23:58:53 +0000 (0:00:00.175) 0:00:52.516 *********** 2025-06-01 23:59:20.162532 | orchestrator | changed: [testbed-node-0] 2025-06-01 23:59:20.162543 | orchestrator | 2025-06-01 23:59:20.162554 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-01 23:59:20.162565 | orchestrator | 2025-06-01 23:59:20.162576 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-01 23:59:20.162586 | orchestrator | Sunday 01 June 2025 23:59:05 +0000 (0:00:11.655) 0:01:04.172 *********** 2025-06-01 23:59:20.162598 | orchestrator | changed: [testbed-node-1] 2025-06-01 23:59:20.162620 | orchestrator | 2025-06-01 23:59:20.162646 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-01 23:59:20.162657 | orchestrator | 2025-06-01 23:59:20.162668 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-01 23:59:20.162679 | orchestrator | Sunday 01 June 2025 23:59:16 +0000 (0:00:11.247) 0:01:15.420 *********** 2025-06-01 23:59:20.162690 | orchestrator | changed: [testbed-node-2] 2025-06-01 23:59:20.162701 | orchestrator | 2025-06-01 23:59:20.162712 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-01 23:59:20.162723 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-01 23:59:20.162742 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 23:59:20.162754 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 23:59:20.162765 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-01 23:59:20.162776 | orchestrator | 2025-06-01 23:59:20.162787 | orchestrator | 2025-06-01 23:59:20.163145 | orchestrator | 2025-06-01 23:59:20.163168 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-01 23:59:20.163183 | orchestrator | Sunday 01 June 2025 23:59:17 +0000 (0:00:01.098) 0:01:16.518 *********** 2025-06-01 23:59:20.163195 | orchestrator | =============================================================================== 2025-06-01 23:59:20.163206 | orchestrator | Create admin user ------------------------------------------------------ 41.45s 2025-06-01 23:59:20.163217 | orchestrator | Restart ceph manager service ------------------------------------------- 24.00s 2025-06-01 23:59:20.163228 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.04s 2025-06-01 23:59:20.163239 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.74s 2025-06-01 23:59:20.163250 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.44s 2025-06-01 23:59:20.163260 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.15s 2025-06-01 23:59:20.163271 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.10s 2025-06-01 23:59:20.163282 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.09s 2025-06-01 23:59:20.163293 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.02s 2025-06-01 23:59:20.163303 | orchestrator | Set 
mgr/dashboard/ssl to false ------------------------------------------ 1.02s 2025-06-01 23:59:20.163314 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.18s 2025-06-01 23:59:20.163326 | orchestrator | 2025-06-01 23:59:20 | INFO  | Task 57dd1b5a-6653-4e97-9804-24f71a44d67a is in state STARTED 2025-06-01 23:59:20.163338 | orchestrator | 2025-06-01 23:59:20 | INFO  | Task 46ae15ef-6408-4095-b0c0-e4017efa90af is in state STARTED 2025-06-01 23:59:20.163348 | orchestrator | 2025-06-01 23:59:20 | INFO  | Task 0bdb5bff-f695-4264-8b60-bb1557704b5c is in state STARTED 2025-06-01 23:59:20.163359 | orchestrator | 2025-06-01 23:59:20 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:59:23.196565 | orchestrator | 2025-06-01 23:59:23 | INFO  | Task de5d431d-ed3a-445c-8431-350e1af1da4c is in state STARTED 2025-06-01 23:59:23.196678 | orchestrator | 2025-06-01 23:59:23 | INFO  | Task 57dd1b5a-6653-4e97-9804-24f71a44d67a is in state STARTED 2025-06-01 23:59:23.197468 | orchestrator | 2025-06-01 23:59:23 | INFO  | Task 46ae15ef-6408-4095-b0c0-e4017efa90af is in state STARTED 2025-06-01 23:59:23.197959 | orchestrator | 2025-06-01 23:59:23 | INFO  | Task 0bdb5bff-f695-4264-8b60-bb1557704b5c is in state STARTED 2025-06-01 23:59:23.198011 | orchestrator | 2025-06-01 23:59:23 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:59:26.246775 | orchestrator | 2025-06-01 23:59:26 | INFO  | Task de5d431d-ed3a-445c-8431-350e1af1da4c is in state STARTED 2025-06-01 23:59:26.247099 | orchestrator | 2025-06-01 23:59:26 | INFO  | Task 57dd1b5a-6653-4e97-9804-24f71a44d67a is in state STARTED 2025-06-01 23:59:26.247993 | orchestrator | 2025-06-01 23:59:26 | INFO  | Task 46ae15ef-6408-4095-b0c0-e4017efa90af is in state STARTED 2025-06-01 23:59:26.248663 | orchestrator | 2025-06-01 23:59:26 | INFO  | Task 0bdb5bff-f695-4264-8b60-bb1557704b5c is in state STARTED 2025-06-01 23:59:26.248935 | orchestrator | 2025-06-01 23:59:26 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:59:29.289890 | orchestrator | 2025-06-01 23:59:29 | INFO  | Task de5d431d-ed3a-445c-8431-350e1af1da4c is in state STARTED 2025-06-01 23:59:29.290122 | orchestrator | 2025-06-01 23:59:29 | INFO  | Task 57dd1b5a-6653-4e97-9804-24f71a44d67a is in state STARTED 2025-06-01 23:59:29.291871 | orchestrator | 2025-06-01 23:59:29 | INFO  | Task 46ae15ef-6408-4095-b0c0-e4017efa90af is in state STARTED 2025-06-01 23:59:29.292529 | orchestrator | 2025-06-01 23:59:29 | INFO  | Task 0bdb5bff-f695-4264-8b60-bb1557704b5c is in state STARTED 2025-06-01 23:59:29.292554 | orchestrator | 2025-06-01 23:59:29 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:59:32.330481 | orchestrator | 2025-06-01 23:59:32 | INFO  | Task de5d431d-ed3a-445c-8431-350e1af1da4c is in state STARTED 2025-06-01 23:59:32.330610 | orchestrator | 2025-06-01 23:59:32 | INFO  | Task 57dd1b5a-6653-4e97-9804-24f71a44d67a is in state STARTED 2025-06-01 23:59:32.331222 | orchestrator | 2025-06-01 23:59:32 | INFO  | Task 46ae15ef-6408-4095-b0c0-e4017efa90af is in state STARTED 2025-06-01 23:59:32.333455 | orchestrator | 2025-06-01 23:59:32 | INFO  | Task 0bdb5bff-f695-4264-8b60-bb1557704b5c is in state STARTED 2025-06-01 23:59:32.333520 | orchestrator | 2025-06-01 23:59:32 | INFO  | Wait 1 second(s) until the next check 2025-06-01 23:59:35.364528 | orchestrator | 2025-06-01 23:59:35 | INFO  | Task de5d431d-ed3a-445c-8431-350e1af1da4c is in state STARTED 2025-06-01 23:59:35.364624 | orchestrator | 2025-06-01 23:59:35 | INFO  | 
Task 57dd1b5a-6653-4e97-9804-24f71a44d67a is in state STARTED 2025-06-01 23:59:35.365144 | orchestrator | 2025-06-01 23:59:35 | INFO  | Task 46ae15ef-6408-4095-b0c0-e4017efa90af is in state STARTED 2025-06-01 23:59:35.365830 | orchestrator | 2025-06-01 23:59:35 | INFO  | Task 0bdb5bff-f695-4264-8b60-bb1557704b5c is in state STARTED 2025-06-01 23:59:35.365853 | orchestrator | 2025-06-01 23:59:35 | INFO  | Wait 1 second(s) until the next check [... repetitive ~3 s polling elided: tasks de5d431d, 57dd1b5a, 46ae15ef and 0bdb5bff remained in state STARTED from 23:59:38 through 2025-06-02 00:00:39 ...]
second(s) until the next check 2025-06-02 00:00:42.421363 | orchestrator | 2025-06-02 00:00:42 | INFO  | Task de5d431d-ed3a-445c-8431-350e1af1da4c is in state STARTED 2025-06-02 00:00:42.424160 | orchestrator | 2025-06-02 00:00:42 | INFO  | Task 57dd1b5a-6653-4e97-9804-24f71a44d67a is in state STARTED 2025-06-02 00:00:42.427548 | orchestrator | 2025-06-02 00:00:42 | INFO  | Task 46ae15ef-6408-4095-b0c0-e4017efa90af is in state STARTED 2025-06-02 00:00:42.432311 | orchestrator | 2025-06-02 00:00:42 | INFO  | Task 0bdb5bff-f695-4264-8b60-bb1557704b5c is in state STARTED 2025-06-02 00:00:42.432381 | orchestrator | 2025-06-02 00:00:42 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:00:45.478694 | orchestrator | 2025-06-02 00:00:45 | INFO  | Task de5d431d-ed3a-445c-8431-350e1af1da4c is in state STARTED 2025-06-02 00:00:45.478928 | orchestrator | 2025-06-02 00:00:45 | INFO  | Task 57dd1b5a-6653-4e97-9804-24f71a44d67a is in state STARTED 2025-06-02 00:00:45.481631 | orchestrator | 2025-06-02 00:00:45 | INFO  | Task 46ae15ef-6408-4095-b0c0-e4017efa90af is in state STARTED 2025-06-02 00:00:45.482821 | orchestrator | 2025-06-02 00:00:45 | INFO  | Task 0bdb5bff-f695-4264-8b60-bb1557704b5c is in state STARTED 2025-06-02 00:00:45.484066 | orchestrator | 2025-06-02 00:00:45 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:00:48.535257 | orchestrator | 2025-06-02 00:00:48 | INFO  | Task de5d431d-ed3a-445c-8431-350e1af1da4c is in state STARTED 2025-06-02 00:00:48.535369 | orchestrator | 2025-06-02 00:00:48 | INFO  | Task 57dd1b5a-6653-4e97-9804-24f71a44d67a is in state STARTED 2025-06-02 00:00:48.539316 | orchestrator | 2025-06-02 00:00:48 | INFO  | Task 46ae15ef-6408-4095-b0c0-e4017efa90af is in state STARTED 2025-06-02 00:00:48.539392 | orchestrator | 2025-06-02 00:00:48 | INFO  | Task 0bdb5bff-f695-4264-8b60-bb1557704b5c is in state STARTED 2025-06-02 00:00:48.539894 | orchestrator | 2025-06-02 00:00:48 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:00:51.601929 | orchestrator | 2025-06-02 00:00:51 | INFO  | Task de5d431d-ed3a-445c-8431-350e1af1da4c is in state STARTED 2025-06-02 00:00:51.602248 | orchestrator | 2025-06-02 00:00:51 | INFO  | Task 57dd1b5a-6653-4e97-9804-24f71a44d67a is in state STARTED 2025-06-02 00:00:51.602635 | orchestrator | 2025-06-02 00:00:51 | INFO  | Task 46ae15ef-6408-4095-b0c0-e4017efa90af is in state STARTED 2025-06-02 00:00:51.603875 | orchestrator | 2025-06-02 00:00:51 | INFO  | Task 0bdb5bff-f695-4264-8b60-bb1557704b5c is in state STARTED 2025-06-02 00:00:51.603901 | orchestrator | 2025-06-02 00:00:51 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:00:54.645413 | orchestrator | 2025-06-02 00:00:54 | INFO  | Task de5d431d-ed3a-445c-8431-350e1af1da4c is in state STARTED 2025-06-02 00:00:54.645544 | orchestrator | 2025-06-02 00:00:54 | INFO  | Task 57dd1b5a-6653-4e97-9804-24f71a44d67a is in state STARTED 2025-06-02 00:00:54.645563 | orchestrator | 2025-06-02 00:00:54 | INFO  | Task 46ae15ef-6408-4095-b0c0-e4017efa90af is in state STARTED 2025-06-02 00:00:54.646176 | orchestrator | 2025-06-02 00:00:54 | INFO  | Task 0bdb5bff-f695-4264-8b60-bb1557704b5c is in state STARTED 2025-06-02 00:00:54.646209 | orchestrator | 2025-06-02 00:00:54 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:00:57.696609 | orchestrator | 2025-06-02 00:00:57 | INFO  | Task de5d431d-ed3a-445c-8431-350e1af1da4c is in state STARTED 2025-06-02 00:00:57.697494 | orchestrator | 2025-06-02 00:00:57 | INFO  | Task 
57dd1b5a-6653-4e97-9804-24f71a44d67a is in state STARTED 2025-06-02 00:00:57.699066 | orchestrator | 2025-06-02 00:00:57 | INFO  | Task 46ae15ef-6408-4095-b0c0-e4017efa90af is in state STARTED 2025-06-02 00:00:57.700503 | orchestrator | 2025-06-02 00:00:57 | INFO  | Task 0bdb5bff-f695-4264-8b60-bb1557704b5c is in state STARTED 2025-06-02 00:00:57.700550 | orchestrator | 2025-06-02 00:00:57 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:01:00.737126 | orchestrator | 2025-06-02 00:01:00 | INFO  | Task de5d431d-ed3a-445c-8431-350e1af1da4c is in state STARTED 2025-06-02 00:01:00.737199 | orchestrator | 2025-06-02 00:01:00 | INFO  | Task 57dd1b5a-6653-4e97-9804-24f71a44d67a is in state STARTED 2025-06-02 00:01:00.738112 | orchestrator | 2025-06-02 00:01:00 | INFO  | Task 46ae15ef-6408-4095-b0c0-e4017efa90af is in state STARTED 2025-06-02 00:01:00.740878 | orchestrator | 2025-06-02 00:01:00 | INFO  | Task 0bdb5bff-f695-4264-8b60-bb1557704b5c is in state STARTED 2025-06-02 00:01:00.740911 | orchestrator | 2025-06-02 00:01:00 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:01:03.788636 | orchestrator | 2025-06-02 00:01:03 | INFO  | Task de5d431d-ed3a-445c-8431-350e1af1da4c is in state STARTED 2025-06-02 00:01:03.789646 | orchestrator | 2025-06-02 00:01:03 | INFO  | Task 57dd1b5a-6653-4e97-9804-24f71a44d67a is in state STARTED 2025-06-02 00:01:03.795700 | orchestrator | 2025-06-02 00:01:03 | INFO  | Task 46ae15ef-6408-4095-b0c0-e4017efa90af is in state STARTED 2025-06-02 00:01:03.797586 | orchestrator | 2025-06-02 00:01:03 | INFO  | Task 0bdb5bff-f695-4264-8b60-bb1557704b5c is in state STARTED 2025-06-02 00:01:03.798059 | orchestrator | 2025-06-02 00:01:03 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:01:06.842535 | orchestrator | 2025-06-02 00:01:06 | INFO  | Task de5d431d-ed3a-445c-8431-350e1af1da4c is in state STARTED 2025-06-02 00:01:06.845895 | orchestrator | 2025-06-02 00:01:06 | INFO  | Task 57dd1b5a-6653-4e97-9804-24f71a44d67a is in state STARTED 2025-06-02 00:01:06.847478 | orchestrator | 2025-06-02 00:01:06 | INFO  | Task 46ae15ef-6408-4095-b0c0-e4017efa90af is in state STARTED 2025-06-02 00:01:06.850145 | orchestrator | 2025-06-02 00:01:06 | INFO  | Task 0bdb5bff-f695-4264-8b60-bb1557704b5c is in state STARTED 2025-06-02 00:01:06.850193 | orchestrator | 2025-06-02 00:01:06 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:01:09.896819 | orchestrator | 2025-06-02 00:01:09 | INFO  | Task de5d431d-ed3a-445c-8431-350e1af1da4c is in state STARTED 2025-06-02 00:01:09.897338 | orchestrator | 2025-06-02 00:01:09 | INFO  | Task 57dd1b5a-6653-4e97-9804-24f71a44d67a is in state STARTED 2025-06-02 00:01:09.899085 | orchestrator | 2025-06-02 00:01:09 | INFO  | Task 46ae15ef-6408-4095-b0c0-e4017efa90af is in state STARTED 2025-06-02 00:01:09.899440 | orchestrator | 2025-06-02 00:01:09 | INFO  | Task 0bdb5bff-f695-4264-8b60-bb1557704b5c is in state STARTED 2025-06-02 00:01:09.899457 | orchestrator | 2025-06-02 00:01:09 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:01:12.955951 | orchestrator | 2025-06-02 00:01:12 | INFO  | Task de5d431d-ed3a-445c-8431-350e1af1da4c is in state STARTED 2025-06-02 00:01:12.958106 | orchestrator | 2025-06-02 00:01:12 | INFO  | Task 57dd1b5a-6653-4e97-9804-24f71a44d67a is in state STARTED 2025-06-02 00:01:12.960837 | orchestrator | 2025-06-02 00:01:12 | INFO  | Task 46ae15ef-6408-4095-b0c0-e4017efa90af is in state STARTED 2025-06-02 00:01:12.962251 | orchestrator | 2025-06-02 00:01:12 | INFO  | Task 
2025-06-02 00:01:12.962293 | orchestrator | 2025-06-02 00:01:12 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:01:16.021715 | orchestrator | 2025-06-02 00:01:16 | INFO  | Task de5d431d-ed3a-445c-8431-350e1af1da4c is in state STARTED
2025-06-02 00:01:16.023320 | orchestrator | 2025-06-02 00:01:16 | INFO  | Task 57dd1b5a-6653-4e97-9804-24f71a44d67a is in state SUCCESS
[The finished task's buffered Ansible output was then flushed at 2025-06-02 00:01:16; the repeated "2025-06-02 00:01:16.0xxxxx | orchestrator | " stream prefixes are elided from here through the end of the play recap for readability.]

PLAY [Group hosts based on configuration] **************************************

TASK [Group hosts based on Kolla action] ***************************************
Sunday 01 June 2025 23:58:08 +0000 (0:00:00.399) 0:00:00.399 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Group hosts based on enabled services] ***********************************
Sunday 01 June 2025 23:58:09 +0000 (0:00:00.318) 0:00:00.717 ***********
ok: [testbed-node-0] => (item=enable_glance_True)
ok: [testbed-node-1] => (item=enable_glance_True)
ok: [testbed-node-2] => (item=enable_glance_True)

PLAY [Apply role glance] *******************************************************

TASK [glance : include_tasks] **************************************************
Sunday 01 June 2025 23:58:09 +0000 (0:00:00.429) 0:00:01.147 ***********
included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [service-ks-register : glance | Creating services] ************************
Sunday 01 June 2025 23:58:10 +0000 (0:00:00.593) 0:00:01.741 ***********
changed: [testbed-node-0] => (item=glance (image))

TASK [service-ks-register : glance | Creating endpoints] ***********************
Sunday 01 June 2025 23:58:13 +0000 (0:00:03.069) 0:00:04.810 ***********
changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
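[Editor's note: the two tasks above register Glance in Keystone as an "image" service with internal and public endpoints. A rough openstacksdk sketch of the same registration -- the role itself uses Kolla's Ansible modules; the cloud name and region are assumptions, the URLs are taken from the log:]

    import openstack

    conn = openstack.connect(cloud="testbed")  # cloud name is illustrative

    # Register the image service, then one endpoint per interface.
    service = conn.identity.create_service(name="glance", type="image")
    for interface, url in [
        ("internal", "https://api-int.testbed.osism.xyz:9292"),
        ("public", "https://api.testbed.osism.xyz:9292"),
    ]:
        conn.identity.create_endpoint(
            service_id=service.id,
            interface=interface,
            url=url,
            region_id="RegionOne",  # assumption; the log does not show the region
        )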
TASK [service-ks-register : glance | Creating projects] ************************
Sunday 01 June 2025 23:58:20 +0000 (0:00:06.897) 0:00:11.708 ***********
changed: [testbed-node-0] => (item=service)

TASK [service-ks-register : glance | Creating users] ***************************
Sunday 01 June 2025 23:58:23 +0000 (0:00:03.510) 0:00:15.219 ***********
[WARNING]: Module did not set no_log for update_password
changed: [testbed-node-0] => (item=glance -> service)

TASK [service-ks-register : glance | Creating roles] ***************************
Sunday 01 June 2025 23:58:27 +0000 (0:00:03.961) 0:00:19.180 ***********
ok: [testbed-node-0] => (item=admin)

TASK [service-ks-register : glance | Granting user roles] **********************
Sunday 01 June 2025 23:58:31 +0000 (0:00:03.460) 0:00:22.641 ***********
changed: [testbed-node-0] => (item=glance -> service -> admin)
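[Editor's note: these tasks create the service project and the glance service user, then grant it the admin role on that project. A comparable openstacksdk sketch, with names from the log; the password handling is illustrative, which is also what the no_log warning above is complaining about:]

    import openstack

    conn = openstack.connect(cloud="testbed")  # cloud name is illustrative

    project = conn.identity.create_project(name="service")
    user = conn.identity.create_user(
        name="glance",
        password="CHANGE_ME",  # real deployments take this from the Kolla passwords file
        default_project_id=project.id,
    )
    role = conn.identity.find_role("admin")
    conn.identity.assign_project_role_to_user(project, user, role)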
TASK [glance : Ensuring config directories exist] ******************************
Sunday 01 June 2025 23:58:34 +0000 (0:00:03.902) 0:00:26.543 ***********
changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
changed: [testbed-node-0] => (item={... same glance-api definition as above, with the node-local address 192.168.16.10 in no_proxy and the healthcheck URL ...})
changed: [testbed-node-1] => (item={... same glance-api definition as above, with the node-local address 192.168.16.11 in no_proxy and the healthcheck URL ...})
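[Editor's note: the container definition above carries a Docker healthcheck that runs healthcheck_curl against the node-local API port every 30 seconds. A minimal Python stand-in for such a liveness probe -- illustrative only, Kolla images ship their own healthcheck_curl script:]

    import sys
    import urllib.error
    import urllib.request

    def healthcheck(url: str, timeout: float = 30.0) -> int:
        """Return 0 (healthy) if the endpoint answers with any HTTP status
        below 500, else 1 -- roughly what a curl-based probe checks."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                status = resp.status
        except urllib.error.HTTPError as exc:  # a non-2xx reply still proves liveness
            status = exc.code
        except OSError:                        # connection refused, timeout, ...
            return 1
        return 0 if status < 500 else 1

    if __name__ == "__main__":
        # e.g. python3 healthcheck.py http://192.168.16.10:9292
        sys.exit(healthcheck(sys.argv[1]))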
TASK [glance : include_tasks] **************************************************
Sunday 01 June 2025 23:58:42 +0000 (0:00:07.010) 0:00:33.554 ***********
included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [glance : Ensuring glance service ceph config subdir exists] **************
Sunday 01 June 2025 23:58:42 +0000 (0:00:00.578) 0:00:34.132 ***********
changed: [testbed-node-2]
changed: [testbed-node-0]
changed: [testbed-node-1]

TASK [glance : Copy over multiple ceph configs for Glance] *********************
Sunday 01 June 2025 23:58:46 +0000 (0:00:03.547) 0:00:37.679 ***********
changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})

TASK [glance : Copy over ceph Glance keyrings] *********************************
Sunday 01 June 2025 23:58:47 +0000 (0:00:01.479) 0:00:39.159 ***********
changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})

TASK [glance : Ensuring config directory has correct owner and permission] *****
Sunday 01 June 2025 23:58:48 +0000 (0:00:01.098) 0:00:40.258 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [glance : Check if policies shall be overwritten] *************************
Sunday 01 June 2025 23:58:49 +0000 (0:00:00.136) 0:00:41.100 ***********
skipping: [testbed-node-0]

TASK [glance : Set glance policy file] *****************************************
Sunday 01 June 2025 23:58:49 +0000 (0:00:00.377) 0:00:41.236 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
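[Editor's note: the external_ceph.yml tasks above stage a ceph.conf and a Glance keyring so the image store can use the "rbd" backend named in the items. A sketch of the kind of glance-api.conf store section this enables, written with configparser -- option names follow the glance_store rbd driver, and the values here are assumptions, not the testbed's actual configuration:]

    import configparser

    # Illustrative [glance_store]/[rbd] layout for an rbd backend.
    cfg = configparser.ConfigParser()
    cfg["DEFAULT"] = {"enabled_backends": "rbd:rbd"}
    cfg["glance_store"] = {"default_backend": "rbd"}
    cfg["rbd"] = {
        "rbd_store_ceph_conf": "/etc/ceph/ceph.conf",  # the config copied above
        "rbd_store_user": "glance",
        "rbd_store_pool": "images",
    }
    with open("glance-api.conf.sample", "w") as fh:
        cfg.write(fh)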
TASK [glance : include_tasks] **************************************************
Sunday 01 June 2025 23:58:50 +0000 (0:00:00.377) 0:00:41.613 ***********
included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
Sunday 01 June 2025 23:58:50 +0000 (0:00:00.565) 0:00:42.179 ***********
changed: [testbed-node-2] => (item={... glance-api definition as above, node-local address 192.168.16.12 ...})
changed: [testbed-node-0] => (item={... glance-api definition as above, node-local address 192.168.16.10 ...})
changed: [testbed-node-1] => (item={... glance-api definition as above, node-local address 192.168.16.11 ...})

TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] ***
Sunday 01 June 2025 23:58:57 +0000 (0:00:06.404) 0:00:48.584 ***********
skipping: [testbed-node-2] => (item={... glance-api definition as above, node-local address 192.168.16.12 ...})
skipping: [testbed-node-2]
skipping: [testbed-node-0] => (item={... glance-api definition as above, node-local address 192.168.16.10 ...})
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item={... glance-api definition as above, node-local address 192.168.16.11 ...})
skipping: [testbed-node-1]

TASK [service-cert-copy : glance | Copying over backend internal TLS key] ******
Sunday 01 June 2025 23:58:59 +0000 (0:00:02.905) 0:00:51.489 ***********
skipping: [testbed-node-1] => (item={... glance-api definition as above, node-local address 192.168.16.11 ...})
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={... glance-api definition as above, node-local address 192.168.16.12 ...})
skipping: [testbed-node-2]
skipping: [testbed-node-0] => (item={... glance-api definition as above, node-local address 192.168.16.10 ...})
skipping: [testbed-node-0]

TASK [glance : Creating TLS backend PEM File] **********************************
Sunday 01 June 2025 23:59:03 +0000 (0:00:03.877) 0:00:55.367 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-2]
skipping: [testbed-node-1]
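[Editor's note: the "Creating TLS backend PEM File" step, skipped here because backend TLS is disabled, would combine the service certificate and key into one PEM file for a TLS-terminating proxy. A trivial sketch of that operation; the paths are illustrative, not the role's actual file names:]

    from pathlib import Path

    def build_pem(cert_path: str, key_path: str, pem_path: str) -> None:
        # HAProxy-style consumers expect the certificate followed by the
        # private key in a single PEM file.
        cert = Path(cert_path).read_text()
        key = Path(key_path).read_text()
        Path(pem_path).write_text(cert + key)

    # Example (hypothetical paths):
    # build_pem("glance-cert.pem", "glance-key.pem", "glance.pem")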
TASK [glance : Copying over config.json files for services] ********************
Sunday 01 June 2025 23:59:07 +0000 (0:00:03.611) 0:00:58.978 ***********
changed: [testbed-node-0] => (item={... glance-api definition as above, node-local address 192.168.16.10 ...})
changed: [testbed-node-2] => (item={... glance-api definition as above, node-local address 192.168.16.12 ...})
changed: [testbed-node-1] => (item={... glance-api definition as above, node-local address 192.168.16.11 ...})

TASK [glance : Copying over glance-api.conf] ***********************************
Sunday 01 June 2025 23:59:12 +0000 (0:00:05.175) 0:01:04.153 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [glance : Copying over glance-cache.conf for glance_api] ******************
Sunday 01 June 2025 23:59:21 +0000 (0:00:09.114) 0:01:13.268 ***********
skipping: [testbed-node-1]
skipping: [testbed-node-0]
skipping: [testbed-node-2]

TASK [glance : Copying over glance-swift.conf for glance_api] ******************
Sunday 01 June 2025 23:59:27 +0000 (0:00:05.975) 0:01:19.243 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
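[Editor's note: the config.json copied above is Kolla's container bootstrap contract -- it tells the image's entrypoint which command to run and which staged files to move into place with which ownership. A hand-written sketch of such a file for glance-api; the field layout follows Kolla's documented format, but the exact contents in this deployment may differ:]

    import json

    config = {
        "command": "glance-api",
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/glance-api.conf",
                "dest": "/etc/glance/glance-api.conf",
                "owner": "glance",
                "perm": "0600",
            },
        ],
    }

    with open("config.json", "w") as fh:
        json.dump(config, fh, indent=4)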
00:01:16.028405 | orchestrator | 2025-06-02 00:01:16.028422 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-06-02 00:01:16.028438 | orchestrator | Sunday 01 June 2025 23:59:33 +0000 (0:00:06.139) 0:01:25.382 *********** 2025-06-02 00:01:16.028454 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:01:16.028469 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:01:16.028485 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:01:16.028501 | orchestrator | 2025-06-02 00:01:16.028517 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-06-02 00:01:16.028535 | orchestrator | Sunday 01 June 2025 23:59:38 +0000 (0:00:05.139) 0:01:30.522 *********** 2025-06-02 00:01:16.028550 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:01:16.028566 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:01:16.028583 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:01:16.028599 | orchestrator | 2025-06-02 00:01:16.028614 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-06-02 00:01:16.028624 | orchestrator | Sunday 01 June 2025 23:59:43 +0000 (0:00:04.031) 0:01:34.553 *********** 2025-06-02 00:01:16.028634 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:01:16.028644 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:01:16.028653 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:01:16.028663 | orchestrator | 2025-06-02 00:01:16.028679 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-06-02 00:01:16.028689 | orchestrator | Sunday 01 June 2025 23:59:43 +0000 (0:00:00.276) 0:01:34.829 *********** 2025-06-02 00:01:16.028699 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-02 00:01:16.028709 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:01:16.028719 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-02 00:01:16.028735 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:01:16.028745 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-02 00:01:16.028839 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:01:16.028856 | orchestrator | 2025-06-02 00:01:16.028867 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-06-02 00:01:16.028876 | orchestrator | Sunday 01 June 2025 23:59:49 +0000 (0:00:06.574) 0:01:41.404 *********** 2025-06-02 00:01:16.028888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 00:01:16.028916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 00:01:16.028936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': 
['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 00:01:16.028947 | orchestrator | 2025-06-02 00:01:16.028957 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-02 00:01:16.028967 | orchestrator | Sunday 01 June 2025 23:59:56 +0000 (0:00:06.265) 0:01:47.669 *********** 2025-06-02 00:01:16.028976 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:01:16.028986 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:01:16.028996 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:01:16.029005 | orchestrator | 2025-06-02 00:01:16.029015 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-06-02 00:01:16.029024 | orchestrator | Sunday 01 June 2025 23:59:56 +0000 (0:00:00.330) 0:01:47.999 *********** 2025-06-02 00:01:16.029034 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:01:16.029044 | orchestrator | 2025-06-02 00:01:16.029053 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-06-02 00:01:16.029063 | orchestrator | Sunday 01 June 2025 23:59:58 +0000 (0:00:02.059) 0:01:50.059 *********** 2025-06-02 00:01:16.029073 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:01:16.029081 | orchestrator | 2025-06-02 00:01:16.029089 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-06-02 00:01:16.029097 | orchestrator | Monday 02 June 2025 00:00:00 +0000 (0:00:02.267) 0:01:52.326 *********** 2025-06-02 00:01:16.029105 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:01:16.029113 | orchestrator | 2025-06-02 00:01:16.029121 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-06-02 00:01:16.029128 | orchestrator | Monday 02 June 2025 00:00:03 +0000 (0:00:02.569) 0:01:54.896 *********** 2025-06-02 00:01:16.029136 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:01:16.029144 | orchestrator | 2025-06-02 00:01:16.029152 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-06-02 00:01:16.029160 | orchestrator | Monday 02 June 2025 00:00:32 +0000 (0:00:28.796) 0:02:23.692 *********** 2025-06-02 00:01:16.029168 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:01:16.029176 | orchestrator | 2025-06-02 00:01:16.029189 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-02 00:01:16.029197 | orchestrator | Monday 02 June 2025 00:00:34 +0000 (0:00:02.435) 0:02:26.128 *********** 2025-06-02 00:01:16.029205 | orchestrator | 2025-06-02 00:01:16.029213 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-02 
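The glance bootstrap sequence above follows kolla-ansible's usual database pattern: create the service database, create the service user and grant it permissions, enable MariaDB's log_bin_trust_function_creators so the schema migration can create stored functions while binary logging is active, run the bootstrap container, then disable the flag again. A rough sketch of the equivalent SQL, assuming a PyMySQL connection to the database VIP; host, credentials, and the exact grant scope are illustrative assumptions, not values from this deployment:

    # Rough sketch of the SQL behind the glance bootstrap tasks above.
    # Host, credentials, and grant scope are illustrative assumptions.
    import pymysql

    conn = pymysql.connect(host="192.168.16.9", user="root", password="...")
    with conn.cursor() as cur:
        cur.execute("CREATE DATABASE IF NOT EXISTS glance")
        cur.execute("CREATE USER IF NOT EXISTS 'glance'@'%' IDENTIFIED BY '...'")
        cur.execute("GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%'")
        # Allow the bootstrap container to create stored functions while
        # binary logging is enabled; toggled back off once it finishes.
        cur.execute("SET GLOBAL log_bin_trust_function_creators = 1")
        # ... the bootstrap container runs Glance's schema migration here ...
        cur.execute("SET GLOBAL log_bin_trust_function_creators = 0")
    conn.commit()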
00:01:16.029226 | orchestrator | Monday 02 June 2025 00:00:34 +0000 (0:00:00.064) 0:02:26.193 *********** 2025-06-02 00:01:16.029234 | orchestrator | 2025-06-02 00:01:16.029242 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-02 00:01:16.029250 | orchestrator | Monday 02 June 2025 00:00:34 +0000 (0:00:00.063) 0:02:26.256 *********** 2025-06-02 00:01:16.029258 | orchestrator | 2025-06-02 00:01:16.029265 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-06-02 00:01:16.029273 | orchestrator | Monday 02 June 2025 00:00:34 +0000 (0:00:00.064) 0:02:26.320 *********** 2025-06-02 00:01:16.029281 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:01:16.029289 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:01:16.029297 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:01:16.029305 | orchestrator | 2025-06-02 00:01:16.029313 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 00:01:16.029322 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-02 00:01:16.029335 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-02 00:01:16.029343 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-02 00:01:16.029351 | orchestrator | 2025-06-02 00:01:16.029359 | orchestrator | 2025-06-02 00:01:16.029367 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 00:01:16.029375 | orchestrator | Monday 02 June 2025 00:01:15 +0000 (0:00:40.307) 0:03:06.627 *********** 2025-06-02 00:01:16.029383 | orchestrator | =============================================================================== 2025-06-02 00:01:16.029391 | orchestrator | glance : Restart glance-api container ---------------------------------- 40.31s 2025-06-02 00:01:16.029399 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.80s 2025-06-02 00:01:16.029406 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 9.11s 2025-06-02 00:01:16.029414 | orchestrator | glance : Ensuring config directories exist ------------------------------ 7.01s 2025-06-02 00:01:16.029422 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.90s 2025-06-02 00:01:16.029430 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 6.57s 2025-06-02 00:01:16.029438 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 6.40s 2025-06-02 00:01:16.029446 | orchestrator | glance : Check glance containers ---------------------------------------- 6.27s 2025-06-02 00:01:16.029453 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 6.14s 2025-06-02 00:01:16.029461 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 5.98s 2025-06-02 00:01:16.029469 | orchestrator | glance : Copying over config.json files for services -------------------- 5.18s 2025-06-02 00:01:16.029477 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 5.14s 2025-06-02 00:01:16.029485 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 4.03s 2025-06-02 00:01:16.029493 | 
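The custom_member_list entries in the glance haproxy configuration above are literal HAProxy server directives, one per controller, with the health-check options "check inter 2000 rise 2 fall 5": probe every 2000 ms, two consecutive successes to mark a member up, five failures to mark it down. A minimal sketch of generating such member lines from a node-to-IP mapping; the mapping and helper name are illustrative:

    # Illustrative: rebuild the HAProxy "server" lines from the
    # custom_member_list above out of a node -> IP mapping.
    nodes = {
        "testbed-node-0": "192.168.16.10",
        "testbed-node-1": "192.168.16.11",
        "testbed-node-2": "192.168.16.12",
    }

    def member_lines(nodes, port=9292):
        return [
            f"server {name} {ip}:{port} check inter 2000 rise 2 fall 5"
            for name, ip in nodes.items()
        ]

    print("\n".join(member_lines(nodes)))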
orchestrator | service-ks-register : glance | Creating users --------------------------- 3.96s 2025-06-02 00:01:16.029501 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.90s 2025-06-02 00:01:16.029508 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.88s 2025-06-02 00:01:16.029516 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.61s 2025-06-02 00:01:16.029524 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.55s 2025-06-02 00:01:16.029532 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.51s 2025-06-02 00:01:16.029540 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.46s 2025-06-02 00:01:16.029553 | orchestrator | 2025-06-02 00:01:16 | INFO  | Task 46ae15ef-6408-4095-b0c0-e4017efa90af is in state STARTED 2025-06-02 00:01:16.029561 | orchestrator | 2025-06-02 00:01:16 | INFO  | Task 0bdb5bff-f695-4264-8b60-bb1557704b5c is in state STARTED 2025-06-02 00:01:16.029569 | orchestrator | 2025-06-02 00:01:16 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:01:19.067456 | orchestrator | 2025-06-02 00:01:19 | INFO  | Task de5d431d-ed3a-445c-8431-350e1af1da4c is in state STARTED 2025-06-02 00:01:19.070217 | orchestrator | 2025-06-02 00:01:19 | INFO  | Task b135a5f4-a278-420e-be42-77c1b8199e38 is in state STARTED 2025-06-02 00:01:19.072837 | orchestrator | 2025-06-02 00:01:19 | INFO  | Task 46ae15ef-6408-4095-b0c0-e4017efa90af is in state STARTED 2025-06-02 00:01:19.074463 | orchestrator | 2025-06-02 00:01:19 | INFO  | Task 0bdb5bff-f695-4264-8b60-bb1557704b5c is in state STARTED 2025-06-02 00:01:19.074501 | orchestrator | 2025-06-02 00:01:19 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:01:22.117856 | orchestrator | 2025-06-02 00:01:22 | INFO  | Task de5d431d-ed3a-445c-8431-350e1af1da4c is in state STARTED 2025-06-02 00:01:22.117974 | orchestrator | 2025-06-02 00:01:22 | INFO  | Task b135a5f4-a278-420e-be42-77c1b8199e38 is in state STARTED 2025-06-02 00:01:22.118782 | orchestrator | 2025-06-02 00:01:22 | INFO  | Task 46ae15ef-6408-4095-b0c0-e4017efa90af is in state STARTED 2025-06-02 00:01:22.120018 | orchestrator | 2025-06-02 00:01:22 | INFO  | Task 0bdb5bff-f695-4264-8b60-bb1557704b5c is in state STARTED 2025-06-02 00:01:22.120085 | orchestrator | 2025-06-02 00:01:22 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:01:25.165987 | orchestrator | 2025-06-02 00:01:25 | INFO  | Task de5d431d-ed3a-445c-8431-350e1af1da4c is in state STARTED 2025-06-02 00:01:25.170145 | orchestrator | 2025-06-02 00:01:25 | INFO  | Task b135a5f4-a278-420e-be42-77c1b8199e38 is in state STARTED 2025-06-02 00:01:25.173353 | orchestrator | 2025-06-02 00:01:25 | INFO  | Task 46ae15ef-6408-4095-b0c0-e4017efa90af is in state STARTED 2025-06-02 00:01:25.175322 | orchestrator | 2025-06-02 00:01:25 | INFO  | Task 0bdb5bff-f695-4264-8b60-bb1557704b5c is in state STARTED 2025-06-02 00:01:25.175521 | orchestrator | 2025-06-02 00:01:25 | INFO  | Wait 1 second(s) until the next check 2025-06-02 00:01:28.223965 | orchestrator | 2025-06-02 00:01:28 | INFO  | Task de5d431d-ed3a-445c-8431-350e1af1da4c is in state SUCCESS 2025-06-02 00:01:28.225279 | orchestrator | 2025-06-02 00:01:28.225321 | orchestrator | 2025-06-02 00:01:28.225333 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 
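The interleaved "Task <uuid> is in state STARTED" lines come from the OSISM client waiting on several asynchronous tasks at once and re-checking them until each reports SUCCESS. A generic sketch of such a wait loop; get_task_state() is a hypothetical placeholder for the real status call, not the actual OSISM API:

    # Generic polling loop of the kind producing the log lines above.
    import time

    def get_task_state(task_id):
        """Hypothetical placeholder for the real OSISM task-status call."""
        return "SUCCESS"

    def wait_for_tasks(task_ids, interval=1):
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state == "SUCCESS":
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)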
00:01:28.225344 | orchestrator | 2025-06-02 00:01:28.225354 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 00:01:28.225364 | orchestrator | Sunday 01 June 2025 23:58:01 +0000 (0:00:00.297) 0:00:00.297 *********** 2025-06-02 00:01:28.225375 | orchestrator | ok: [testbed-manager] 2025-06-02 00:01:28.225386 | orchestrator | ok: [testbed-node-0] 2025-06-02 00:01:28.225396 | orchestrator | ok: [testbed-node-1] 2025-06-02 00:01:28.225406 | orchestrator | ok: [testbed-node-2] 2025-06-02 00:01:28.225416 | orchestrator | ok: [testbed-node-3] 2025-06-02 00:01:28.225425 | orchestrator | ok: [testbed-node-4] 2025-06-02 00:01:28.225435 | orchestrator | ok: [testbed-node-5] 2025-06-02 00:01:28.225445 | orchestrator | 2025-06-02 00:01:28.225455 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 00:01:28.225464 | orchestrator | Sunday 01 June 2025 23:58:01 +0000 (0:00:00.870) 0:00:01.168 *********** 2025-06-02 00:01:28.225474 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-06-02 00:01:28.225485 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-06-02 00:01:28.225494 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-06-02 00:01:28.225556 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-06-02 00:01:28.225568 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-06-02 00:01:28.225577 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-06-02 00:01:28.225587 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-06-02 00:01:28.225596 | orchestrator | 2025-06-02 00:01:28.225606 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-06-02 00:01:28.225615 | orchestrator | 2025-06-02 00:01:28.225625 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-06-02 00:01:28.225635 | orchestrator | Sunday 01 June 2025 23:58:02 +0000 (0:00:00.767) 0:00:01.935 *********** 2025-06-02 00:01:28.225645 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 00:01:28.225656 | orchestrator | 2025-06-02 00:01:28.225666 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-06-02 00:01:28.225676 | orchestrator | Sunday 01 June 2025 23:58:04 +0000 (0:00:01.663) 0:00:03.599 *********** 2025-06-02 00:01:28.225689 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 00:01:28.225703 | orchestrator | changed: [testbed-manager] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:01:28.225717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:01:28.225776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:01:28.226227 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:01:28.226275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:01:28.226296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:01:28.226315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:01:28.226335 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:01:28.226355 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 00:01:28.226402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:01:28.226457 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:01:28.226485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:01:28.226503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:01:28.226520 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:01:28.226537 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:01:28.226554 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:01:28.226572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:01:28.226589 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:01:28.226621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:01:28.226648 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 00:01:28.226666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:01:28.226684 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:01:28.226701 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 00:01:28.226717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:01:28.226733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:01:28.226786 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:01:28.226892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:01:28.226909 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 00:01:28.226920 | orchestrator | 2025-06-02 00:01:28.226930 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-06-02 00:01:28.226941 | orchestrator | Sunday 01 June 2025 23:58:08 +0000 (0:00:03.924) 0:00:07.524 *********** 2025-06-02 00:01:28.226951 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 00:01:28.226962 | orchestrator | 2025-06-02 00:01:28.226972 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-06-02 00:01:28.226982 | orchestrator | Sunday 01 June 2025 23:58:10 +0000 (0:00:01.708) 0:00:09.232 *********** 2025-06-02 00:01:28.226992 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 00:01:28.227003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:01:28.227014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:01:28.227024 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:01:28.227075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:01:28.227087 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:01:28.227097 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:01:28.227107 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:01:28.227117 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:01:28.227128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:01:28.227138 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:01:28.227156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:01:28.227195 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:01:28.227207 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:01:28.227218 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:01:28.227228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:01:28.227238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:01:28.227248 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 00:01:28.227259 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 00:01:28.227310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:01:28.227321 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 00:01:28.227332 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 00:01:28.227343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:01:28.227354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:01:28.227364 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:01:28.227381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:01:28.227391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 
'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:01:28.227410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:01:28.227421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:01:28.227431 | orchestrator | 2025-06-02 00:01:28.227441 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-06-02 00:01:28.227451 | orchestrator | Sunday 01 June 2025 23:58:15 +0000 (0:00:05.847) 0:00:15.080 *********** 2025-06-02 00:01:28.227461 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-02 00:01:28.227471 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 00:01:28.228370 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': 
{}}})  2025-06-02 00:01:28.228414 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-02 00:01:28.228469 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:01:28.228483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 00:01:28.228493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:01:28.228504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:01:28.228514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 00:01:28.228524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:01:28.228541 | orchestrator | skipping: [testbed-manager] 2025-06-02 00:01:28.228551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 00:01:28.228615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:01:28.228657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:01:28.228669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 00:01:28.228680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 00:01:28.228690 | 
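Every loop result in these plays, whether changed or skipping, carries an item of the shape {'key': ..., 'value': ...}: the roles keep their per-service settings in one dictionary and iterate over it with Ansible's dict2items filter, so a single task handles prometheus-server, the exporters, cadvisor, and alertmanager alike. The long runs of skipping results for the backend internal TLS certificate and key simply mean the guarding condition was false for every item, as expected when backend TLS is not enabled for these services (in kolla-ansible this is typically governed by kolla_enable_tls_backend). A small sketch of the filter's effect, using a trimmed-down service mapping:

    # Python equivalent of Ansible's dict2items filter, which produces
    # the {'key': ..., 'value': ...} item shape seen in every loop above.
    services = {
        "prometheus-node-exporter": {"enabled": True},
        "prometheus-cadvisor": {"enabled": True},
    }

    def dict2items(mapping):
        return [{"key": k, "value": v} for k, v in mapping.items()]

    for item in dict2items(services):
        if item["value"]["enabled"]:
            print(f"handling service {item['key']}")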
orchestrator | skipping: [testbed-node-0]
2025-06-02 00:01:28.228716 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:01:28.228726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 00:01:28.228737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:01:28.228781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:01:28.228792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 00:01:28.228803 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 00:01:28.228909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:01:28.228926 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 00:01:28.228937 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-02 00:01:28.228947 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:01:28.228957 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:01:28.228967 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 00:01:28.228985 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 00:01:28.228995 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-02 00:01:28.229005 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:01:28.229015 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 00:01:28.229030 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 00:01:28.229070 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-02 00:01:28.229082 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:01:28.229091 | orchestrator |
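The per-item lines above are standard Ansible loop output: for each host the role iterates over its service dict, prints one "skipping" line per item whose condition is false, and closes with a bare "skipping: [host]" once every item has been skipped. Which items each host iterates over also reveals the service-to-host mapping: testbed-node-0 through testbed-node-2 carry the mysqld, memcached, and elasticsearch exporters of the control plane, testbed-node-3 through testbed-node-5 carry the libvirt exporter of the compute side, and node-exporter plus cadvisor appear on every host. A minimal inventory sketch consistent with that output; only the group names are taken from the log (the 'group' field of each item), the host ranges are inferred and the real generated testbed inventory is more detailed:

    # Hypothetical grouping inferred from the loop items above.
    prometheus-node-exporter:
      hosts:
        testbed-manager:
        testbed-node-[0:5]:
    prometheus-mysqld-exporter:
      hosts:
        testbed-node-[0:2]:    # control plane
    prometheus-memcached-exporter:
      hosts:
        testbed-node-[0:2]:
    prometheus-elasticsearch-exporter:
      hosts:
        testbed-node-[0:2]:
    prometheus-libvirt-exporter:
      hosts:
        testbed-node-[3:5]:    # compute nodes
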
2025-06-02 00:01:28.229101 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2025-06-02 00:01:28.229112 | orchestrator | Sunday 01 June 2025 23:58:17 +0000 (0:00:01.716) 0:00:16.796 ***********
2025-06-02 00:01:28.229122 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-06-02 00:01:28.229139 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 00:01:28.229150 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 00:01:28.229160 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-06-02 00:01:28.229177 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:01:28.229214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 00:01:28.229226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:01:28.229236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:01:28.229252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 00:01:28.229263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:01:28.229272 | orchestrator | skipping: [testbed-manager]
2025-06-02 00:01:28.229283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 00:01:28.229293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:01:28.229311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:01:28.229347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 00:01:28.229359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:01:28.229375 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:01:28.229385 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:01:28.229395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 00:01:28.229405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:01:28.229415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:01:28.229425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 00:01:28.229436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:01:28.229445 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:01:28.229485 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 00:01:28.229497 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 00:01:28.229513 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-02 00:01:28.229524 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:01:28.229534 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 00:01:28.229544 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 00:01:28.229557 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-02 00:01:28.229568 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:01:28.229581 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 00:01:28.229593 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 00:01:28.229639 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-02 00:01:28.229653 | orchestrator | skipping: [testbed-node-5]
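The service-cert-copy task skips on every host, which is the expected outcome when TLS towards the backend services is not enabled for this deployment: the copy is gated behind a TLS switch, so the loop items are evaluated and discarded. A minimal sketch of that gating pattern, assuming the usual kolla_enable_tls_backend toggle; prometheus_services mirrors the dict printed in the loop output, while the src variable is purely illustrative:

    # Sketch only: the when-clauses mirror why every item above reports
    # "skipping"; kolla_enable_tls_backend is assumed to be false in this run.
    - name: "prometheus | Copying over backend internal TLS key"
      become: true
      template:
        src: "{{ kolla_tls_backend_key }}"    # assumed variable name
        dest: "{{ node_config_directory }}/{{ item.key }}/{{ item.key }}-key.pem"
        mode: "0600"
      with_dict: "{{ prometheus_services }}"
      when:
        - kolla_enable_tls_backend | bool
        - item.value.enabled | bool
        - inventory_hostname in groups[item.value.group]
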
2025-06-02 00:01:28.229665 | orchestrator |
2025-06-02 00:01:28.229676 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2025-06-02 00:01:28.229694 | orchestrator | Sunday 01 June 2025 23:58:19 +0000 (0:00:01.906) 0:00:18.703 ***********
2025-06-02 00:01:28.229707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 00:01:28.229718 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-06-02 00:01:28.229728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 00:01:28.229738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 00:01:28.229768 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 00:01:28.229779 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 00:01:28.229823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:01:28.229843 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 00:01:28.229853 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-02 00:01:28.229863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:01:28.229874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:01:28.229884 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 00:01:28.229895 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 00:01:28.229905 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 00:01:28.229946 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 00:01:28.229964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:01:28.229975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:01:28.229985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:01:28.229995 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-06-02 00:01:28.230006 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-02 00:01:28.230047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 00:01:28.230098 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-02 00:01:28.230110 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-02 00:01:28.230121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 00:01:28.230131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-02 00:01:28.230141 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:01:28.230151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:01:28.230161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 00:01:28.230171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
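Every host reports changed here because each enabled service gets a freshly rendered config.json on the target. The volume lists in the items show why this file matters: /etc/kolla/<service>/ is bind-mounted read-only into each container at /var/lib/kolla/config_files/, where kolla's start wrapper reads config.json at container start to copy configuration into place and exec the service command. A sketch of the render task consistent with this output; the loop and conditions match the pattern above, while template name, destination, and mode are assumptions:

    # Sketch of the per-service config.json render (paths and mode assumed).
    - name: Copying over config.json files
      become: true
      template:
        src: "{{ item.key }}.json.j2"
        dest: "{{ node_config_directory }}/{{ item.key }}/config.json"
        mode: "0660"
      with_dict: "{{ prometheus_services }}"
      when:
        - item.value.enabled | bool
        - inventory_hostname in groups[item.value.group]
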
2025-06-02 00:01:28.230188 | orchestrator |
2025-06-02 00:01:28.230198 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2025-06-02 00:01:28.230212 | orchestrator | Sunday 01 June 2025 23:58:25 +0000 (0:00:05.761) 0:00:24.464 ***********
2025-06-02 00:01:28.230222 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 00:01:28.230232 | orchestrator |
2025-06-02 00:01:28.230242 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2025-06-02 00:01:28.230278 | orchestrator | Sunday 01 June 2025 23:58:26 +0000 (0:00:00.939) 0:00:25.404 ***********
2025-06-02 00:01:28.230290 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1049817, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5574977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.230301 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1049817, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5574977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.230311 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1049817, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5574977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.230321 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1049817, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5574977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.230331 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1049804, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5544977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.230341 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1049817, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5574977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.230393 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1049817, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5574977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.230407 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1049804, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5544977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.230425 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1049817, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5574977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.230443 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1049804, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5544977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.230461 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1049804, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5544977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.230480 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1049760, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5434976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.230509 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1049760, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5434976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.230577 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1049804, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5544977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.230598 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1049804, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5544977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.230616 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1049763, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5444975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.230633 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1049804, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5544977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.230650 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1049760, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5434976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.230665 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1049760, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5434976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.230684 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1049763, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5444975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.230698 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1049760, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5434976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.230743 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1049760, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5434976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.230788 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1049790, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5534976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.230798 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1049790, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5534976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.230808 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1049763, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5444975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.230818 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1049763, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5444975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.230835 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1049763, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5444975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.230850 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1049760, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5434976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.230889 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1049771, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5464976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.230900 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1049790, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5534976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.230910 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1049771, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5464976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.230920 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1049790, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5534976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.230930 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1049763, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5444975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.230948 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1049790, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5534976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.230962 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1049771, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5464976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.230998 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1049786, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5494976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.231009 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1049790, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5534976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.231019 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1049786, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5494976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.231030 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1049771, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5464976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.231046 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1049807, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5554976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.231057 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1049771, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5464976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.231071 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1049771, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5464976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.231108 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1049786, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5494976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.231119 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1049763, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5444975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.231130 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1049807, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5554976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.231140 | orchestrator | skipping:
[testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1049786, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5494976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231158 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1049786, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5494976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231168 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1049814, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5564978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231183 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1049786, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5494976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231221 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1049814, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5564978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231233 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1049807, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5554976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231243 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1049807, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5554976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231253 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1049814, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5564978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231269 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1049807, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5554976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231280 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1049845, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5634978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231290 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1049845, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5634978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231326 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1049807, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5554976, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231338 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1049790, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5534976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 00:01:28.231348 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1049845, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5634978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231358 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1049809, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5564978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231375 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1049814, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5564978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231385 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1049814, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5564978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231489 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 12293, 'inode': 1049809, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5564978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231552 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1049814, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5564978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231565 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1049809, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5564978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231575 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1049769, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5454977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231593 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1049845, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5634978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231603 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1049845, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5634978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231613 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1049845, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5634978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231623 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1049769, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5454977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231665 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1049771, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5464976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 00:01:28.231677 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1049785, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5484977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231687 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1049769, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5454977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231703 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1049809, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5564978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 
00:01:28.231713 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1049809, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5564978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231723 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1049809, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5564978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231733 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1049785, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5484977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231827 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1049751, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5424976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231842 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1049785, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5484977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231852 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1049769, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5454977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231870 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1049769, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5454977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231880 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1049786, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5494976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 00:01:28.231890 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1049785, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5484977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231900 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1049751, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5424976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231920 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1049769, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5454977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231930 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1049751, 'dev': 128, 'nlink': 1, 
'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5424976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231946 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1049803, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5544977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231956 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1049785, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5484977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231966 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1049803, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5544977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231976 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1049751, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5424976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.231986 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1049803, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5544977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.232009 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1049785, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5484977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.232019 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1049839, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5634978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.232036 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1049803, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5544977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.232046 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1049807, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5554976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 00:01:28.232056 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1049751, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5424976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.232066 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1049839, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5634978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.232077 | orchestrator | skipping: [testbed-node-1] => 
(item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1049839, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5634978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.232097 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1049751, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5424976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.232108 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1049778, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5484977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.232125 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1049803, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5544977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.232135 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1049839, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5634978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.232145 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1049839, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5634978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.232155 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1049803, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5544977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.232165 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1049778, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5484977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.232196 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1049819, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5614977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.232207 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:01:28.232225 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1049778, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5484977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.232236 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1049778, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5484977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.232246 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1049814, 'dev': 128, 'nlink': 
1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5564978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 00:01:28.232256 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1049778, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5484977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.232266 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1049839, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5634978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.232276 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1049819, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5614977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.232286 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:01:28.232305 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1049819, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5614977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.232321 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:01:28.232329 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1049819, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5614977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.232337 | orchestrator | skipping: 
[testbed-node-4] 2025-06-02 00:01:28.232345 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1049778, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5484977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.232353 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1049819, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5614977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.232361 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:01:28.232369 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1049819, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5614977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 00:01:28.232377 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:01:28.232385 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1049845, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5634978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 00:01:28.232394 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1049809, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5564978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 00:01:28.232416 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1049769, 'dev': 128, 
'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5454977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.232425 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1049785, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5484977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.232433 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1049751, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5424976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.232442 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1049803, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5544977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.232450 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1049839, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5634978, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.232458 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1049778, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5484977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 00:01:28.232466 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1049819, 'dev': 128, 'nlink': 1, 'atime': 1748736131.0, 'mtime': 1748736131.0, 'ctime': 1748816408.5614977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
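[Editor's note] The per-item output above is the tail of a loop over rules files collected under /operations/prometheus: each item printed is the stat-style dict that Ansible's find module returns, and the per-item when guard makes every compute/control node report "skipping" while testbed-manager, the only host deploying prometheus-server in this testbed, reports "changed". A minimal sketch of that pattern, with hypothetical task, variable, and group names (the actual kolla-ansible role source may differ):

- name: Find Prometheus alert rules files  # hypothetical reconstruction
  delegate_to: localhost
  ansible.builtin.find:
    paths: /operations/prometheus
    patterns: '*.rules,*.rec.rules'
  register: prometheus_alert_rules

- name: Copying over prometheus alert rules files
  ansible.builtin.copy:
    src: "{{ item.path }}"
    dest: "/etc/kolla/prometheus-server/{{ item.path | basename }}"  # assumed destination
    mode: "0644"
  loop: "{{ prometheus_alert_rules.files }}"  # each element is the stat dict printed per item above
  # Hosts failing this guard print "skipping: [host] => (item=...)" once per
  # file, which is exactly the output pattern seen above; the group name is
  # an assumption, not taken from the role.
  when: inventory_hostname in groups['prometheus']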
2025-06-02 00:01:28.232481 | orchestrator |
2025-06-02 00:01:28.232492 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-06-02 00:01:28.232501 | orchestrator | Sunday 01 June 2025 23:58:50 +0000 (0:00:24.284) 0:00:49.688 ***********
2025-06-02 00:01:28.232513 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 00:01:28.232522 | orchestrator |
2025-06-02 00:01:28.232530 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-06-02 00:01:28.232538 | orchestrator | Sunday 01 June 2025 23:58:51 +0000 (0:00:00.742) 0:00:50.431 ***********
2025-06-02 00:01:28.232546 | orchestrator | [WARNING]: Skipped
2025-06-02 00:01:28.232554 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 00:01:28.232563 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2025-06-02 00:01:28.232570 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 00:01:28.232578 | orchestrator | manager/prometheus.yml.d' is not a directory
2025-06-02 00:01:28.232586 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 00:01:28.232594 | orchestrator | [WARNING]: Skipped
2025-06-02 00:01:28.232602 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 00:01:28.232610 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2025-06-02 00:01:28.232618 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 00:01:28.232626 | orchestrator | node-0/prometheus.yml.d' is not a directory
2025-06-02 00:01:28.232633 | orchestrator | [WARNING]: Skipped
2025-06-02 00:01:28.232641 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 00:01:28.232649 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2025-06-02 00:01:28.232657 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 00:01:28.232665 | orchestrator | node-1/prometheus.yml.d' is not a directory
2025-06-02 00:01:28.232673 | orchestrator | [WARNING]: Skipped
2025-06-02 00:01:28.232681 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 00:01:28.232688 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2025-06-02 00:01:28.232696 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 00:01:28.232704 | orchestrator | node-2/prometheus.yml.d' is not a directory
2025-06-02 00:01:28.232712 | orchestrator | [WARNING]: Skipped
2025-06-02 00:01:28.232719 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 00:01:28.232727 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2025-06-02 00:01:28.232735 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 00:01:28.232743 | orchestrator | node-3/prometheus.yml.d' is not a directory
2025-06-02 00:01:28.232768 | orchestrator | [WARNING]: Skipped
2025-06-02 00:01:28.232776 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 00:01:28.232784 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2025-06-02 00:01:28.232792 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 00:01:28.232800 | orchestrator | node-5/prometheus.yml.d' is not a directory
2025-06-02 00:01:28.232807 | orchestrator | [WARNING]: Skipped
2025-06-02 00:01:28.232815 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 00:01:28.232823 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2025-06-02 00:01:28.232831 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-02 00:01:28.232844 | orchestrator | node-4/prometheus.yml.d' is not a directory
2025-06-02 00:01:28.232852 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-02 00:01:28.232860 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-06-02 00:01:28.232868 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-06-02 00:01:28.232876 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-02 00:01:28.232884 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-02 00:01:28.232892 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-02 00:01:28.232900 | orchestrator |
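[Editor's note] The [WARNING] lines above are produced by the find module itself whenever a path it is asked to search is absent or not a directory; none of the hosts ship a per-host prometheus.yml.d overlay directory, so every lookup comes back empty and the task still ends "ok". A sketch of the kind of delegated lookup that yields this output (the paths follow the testbed overlay layout seen in the warnings; the task body is an approximation):

- name: Find prometheus host config overrides  # approximated task body
  delegate_to: localhost
  ansible.builtin.find:
    # One per-host overlay directory; when it does not exist, find logs the
    # "Skipped '...' path due to this access issue" warning seen above.
    paths: "/opt/configuration/environments/kolla/files/overlays/prometheus/{{ inventory_hostname }}/prometheus.yml.d"
    patterns: '*.yml'
  register: prometheus_host_overrides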
node-3/prometheus.yml.d' is not a directory 2025-06-02 00:01:28.232768 | orchestrator | [WARNING]: Skipped 2025-06-02 00:01:28.232776 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-02 00:01:28.232784 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-06-02 00:01:28.232792 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-02 00:01:28.232800 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-06-02 00:01:28.232807 | orchestrator | [WARNING]: Skipped 2025-06-02 00:01:28.232815 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-02 00:01:28.232823 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-06-02 00:01:28.232831 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-02 00:01:28.232844 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-06-02 00:01:28.232852 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 00:01:28.232860 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-02 00:01:28.232868 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-02 00:01:28.232876 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-02 00:01:28.232884 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-02 00:01:28.232892 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-02 00:01:28.232900 | orchestrator | 2025-06-02 00:01:28.232907 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-06-02 00:01:28.232915 | orchestrator | Sunday 01 June 2025 23:58:54 +0000 (0:00:02.803) 0:00:53.234 *********** 2025-06-02 00:01:28.232923 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-02 00:01:28.232931 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:01:28.232939 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-02 00:01:28.232947 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:01:28.232955 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-02 00:01:28.232963 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:01:28.232971 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-02 00:01:28.232978 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:01:28.232986 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-02 00:01:28.232994 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:01:28.233002 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-02 00:01:28.233010 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:01:28.233017 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-06-02 00:01:28.233025 | orchestrator | 2025-06-02 00:01:28.233037 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-06-02 00:01:28.233045 | orchestrator | Sunday 01 June 2025 23:59:12 +0000 (0:00:18.324) 0:01:11.559 *********** 2025-06-02 00:01:28.233057 | orchestrator | skipping: [testbed-node-0] => 
(item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-02 00:01:28.233065 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:01:28.233073 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-02 00:01:28.233081 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:01:28.233089 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-02 00:01:28.233097 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:01:28.233104 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-02 00:01:28.233112 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:01:28.233120 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-02 00:01:28.233128 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:01:28.233135 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-02 00:01:28.233143 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:01:28.233151 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-06-02 00:01:28.233159 | orchestrator | 2025-06-02 00:01:28.233167 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-06-02 00:01:28.233175 | orchestrator | Sunday 01 June 2025 23:59:17 +0000 (0:00:05.368) 0:01:16.927 *********** 2025-06-02 00:01:28.233183 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-02 00:01:28.233197 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-02 00:01:28.233205 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:01:28.233212 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:01:28.233220 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-06-02 00:01:28.233228 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-02 00:01:28.233236 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:01:28.233244 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-02 00:01:28.233252 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:01:28.233260 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-02 00:01:28.233268 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:01:28.233276 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-02 00:01:28.233284 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:01:28.233291 | orchestrator | 2025-06-02 00:01:28.233299 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-06-02 00:01:28.233307 | orchestrator | Sunday 01 June 2025 23:59:20 +0000 (0:00:02.696) 0:01:19.624 *********** 
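The "[WARNING]: Skipped '…/prometheus.yml.d' path due to this access issue: … is not a directory" messages earlier in this play are the normal output of Ansible's find module probing optional per-host override directories; when a directory is absent, find only warns and returns an empty file list, so the task still reports "ok". A minimal sketch of that lookup pattern (module usage inferred from the warning format; this is not the role's actual source):

```yaml
---
# Sketch: probe an optional per-host override directory with
# ansible.builtin.find. The overlay path matches the one in the log;
# everything else (play layout, variable names) is illustrative.
- hosts: all
  gather_facts: false
  tasks:
    - name: Find prometheus host config overrides
      delegate_to: localhost
      ansible.builtin.find:
        paths: >-
          /opt/configuration/environments/kolla/files/overlays/prometheus/{{ inventory_hostname }}/prometheus.yml.d
        patterns: "*.yml"
      register: prometheus_host_overrides
      # If the directory does not exist, find emits the
      # "[WARNING]: Skipped ... is not a directory" message seen above,
      # returns files: [], and the task still ends "ok".

    - name: Show how many override fragments were found
      ansible.builtin.debug:
        msg: "{{ prometheus_host_overrides.files | length }} override file(s)"
```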
2025-06-02 00:01:28.233315 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-02 00:01:28.233323 | orchestrator | 2025-06-02 00:01:28.233331 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-06-02 00:01:28.233339 | orchestrator | Sunday 01 June 2025 23:59:21 +0000 (0:00:00.681) 0:01:20.306 *********** 2025-06-02 00:01:28.233347 | orchestrator | skipping: [testbed-manager] 2025-06-02 00:01:28.233355 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:01:28.233363 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:01:28.233370 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:01:28.233378 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:01:28.233386 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:01:28.233394 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:01:28.233401 | orchestrator | 2025-06-02 00:01:28.233409 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-06-02 00:01:28.233417 | orchestrator | Sunday 01 June 2025 23:59:21 +0000 (0:00:00.592) 0:01:20.899 *********** 2025-06-02 00:01:28.233425 | orchestrator | skipping: [testbed-manager] 2025-06-02 00:01:28.233433 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:01:28.233441 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:01:28.233448 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:01:28.233456 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:01:28.233464 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:01:28.233471 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:01:28.233479 | orchestrator | 2025-06-02 00:01:28.233487 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-06-02 00:01:28.233495 | orchestrator | Sunday 01 June 2025 23:59:24 +0000 (0:00:03.019) 0:01:23.918 *********** 2025-06-02 00:01:28.233503 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-02 00:01:28.233511 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-02 00:01:28.233519 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:01:28.233527 | orchestrator | skipping: [testbed-manager] 2025-06-02 00:01:28.233535 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-02 00:01:28.233542 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:01:28.233558 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-02 00:01:28.233566 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:01:28.233579 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-02 00:01:28.233587 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:01:28.233595 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-02 00:01:28.233603 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:01:28.233610 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-02 00:01:28.233618 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:01:28.233626 | orchestrator | 2025-06-02 00:01:28.233634 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-06-02 00:01:28.233641 
| orchestrator | Sunday 01 June 2025 23:59:27 +0000 (0:00:02.683) 0:01:26.601 *********** 2025-06-02 00:01:28.233649 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-02 00:01:28.233657 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:01:28.233665 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-06-02 00:01:28.233673 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-02 00:01:28.233681 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:01:28.233688 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-02 00:01:28.233696 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:01:28.233704 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-02 00:01:28.233712 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:01:28.233720 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-02 00:01:28.233728 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:01:28.233736 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-02 00:01:28.233780 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:01:28.233789 | orchestrator | 2025-06-02 00:01:28.233797 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-06-02 00:01:28.233805 | orchestrator | Sunday 01 June 2025 23:59:30 +0000 (0:00:02.844) 0:01:29.446 *********** 2025-06-02 00:01:28.233813 | orchestrator | [WARNING]: Skipped 2025-06-02 00:01:28.233821 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-06-02 00:01:28.233829 | orchestrator | due to this access issue: 2025-06-02 00:01:28.233837 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-06-02 00:01:28.233845 | orchestrator | not a directory 2025-06-02 00:01:28.233852 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-02 00:01:28.233860 | orchestrator | 2025-06-02 00:01:28.233868 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-06-02 00:01:28.233876 | orchestrator | Sunday 01 June 2025 23:59:32 +0000 (0:00:01.772) 0:01:31.221 *********** 2025-06-02 00:01:28.233884 | orchestrator | skipping: [testbed-manager] 2025-06-02 00:01:28.233892 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:01:28.233900 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:01:28.233908 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:01:28.233916 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:01:28.233924 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:01:28.233931 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:01:28.233939 | orchestrator | 2025-06-02 00:01:28.233947 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-06-02 00:01:28.233961 | orchestrator | Sunday 01 June 2025 23:59:33 +0000 (0:00:01.605) 0:01:32.827 *********** 2025-06-02 00:01:28.233969 | orchestrator | skipping: [testbed-manager] 2025-06-02 
00:01:28.233977 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:01:28.233985 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:01:28.233992 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:01:28.233999 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:01:28.234006 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:01:28.234048 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:01:28.234057 | orchestrator | 2025-06-02 00:01:28.234064 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-06-02 00:01:28.234071 | orchestrator | Sunday 01 June 2025 23:59:34 +0000 (0:00:00.581) 0:01:33.408 *********** 2025-06-02 00:01:28.234078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:01:28.234095 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 00:01:28.234103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:01:28.234110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:01:28.234117 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:01:28.234124 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:01:28.234138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:01:28.234146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:01:28.234153 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:01:28.234169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:01:28.234177 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:01:28.234184 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 00:01:28.234191 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:01:28.234198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:01:28.234211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:01:28.234218 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:01:28.234225 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 00:01:28.234240 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:01:28.234247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:01:28.234254 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 00:01:28.234267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:01:28.234275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:01:28.234281 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 00:01:28.234289 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 00:01:28.234302 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:01:28.234310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 00:01:28.234317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:01:28.234324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:01:28.234335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 00:01:28.234342 | orchestrator | 2025-06-02 00:01:28.234349 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-06-02 00:01:28.234356 | orchestrator | Sunday 01 June 2025 23:59:38 +0000 (0:00:04.754) 0:01:38.163 *********** 2025-06-02 00:01:28.234363 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-02 00:01:28.234370 
| orchestrator | skipping: [testbed-manager] 2025-06-02 00:01:28.234377 | orchestrator | 2025-06-02 00:01:28.234383 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 00:01:28.234390 | orchestrator | Sunday 01 June 2025 23:59:40 +0000 (0:00:01.608) 0:01:39.772 *********** 2025-06-02 00:01:28.234397 | orchestrator | 2025-06-02 00:01:28.234403 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 00:01:28.234410 | orchestrator | Sunday 01 June 2025 23:59:40 +0000 (0:00:00.081) 0:01:39.853 *********** 2025-06-02 00:01:28.234417 | orchestrator | 2025-06-02 00:01:28.234423 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 00:01:28.234430 | orchestrator | Sunday 01 June 2025 23:59:40 +0000 (0:00:00.088) 0:01:39.941 *********** 2025-06-02 00:01:28.234437 | orchestrator | 2025-06-02 00:01:28.234443 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 00:01:28.234450 | orchestrator | Sunday 01 June 2025 23:59:40 +0000 (0:00:00.069) 0:01:40.010 *********** 2025-06-02 00:01:28.234456 | orchestrator | 2025-06-02 00:01:28.234463 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 00:01:28.234470 | orchestrator | Sunday 01 June 2025 23:59:40 +0000 (0:00:00.112) 0:01:40.123 *********** 2025-06-02 00:01:28.234476 | orchestrator | 2025-06-02 00:01:28.234483 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 00:01:28.234490 | orchestrator | Sunday 01 June 2025 23:59:41 +0000 (0:00:00.399) 0:01:40.523 *********** 2025-06-02 00:01:28.234496 | orchestrator | 2025-06-02 00:01:28.234503 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 00:01:28.234510 | orchestrator | Sunday 01 June 2025 23:59:41 +0000 (0:00:00.085) 0:01:40.609 *********** 2025-06-02 00:01:28.234516 | orchestrator | 2025-06-02 00:01:28.234523 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-06-02 00:01:28.234533 | orchestrator | Sunday 01 June 2025 23:59:41 +0000 (0:00:00.100) 0:01:40.709 *********** 2025-06-02 00:01:28.234540 | orchestrator | changed: [testbed-manager] 2025-06-02 00:01:28.234547 | orchestrator | 2025-06-02 00:01:28.234553 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-06-02 00:01:28.234563 | orchestrator | Sunday 01 June 2025 23:59:59 +0000 (0:00:17.693) 0:01:58.403 *********** 2025-06-02 00:01:28.234570 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:01:28.234577 | orchestrator | changed: [testbed-manager] 2025-06-02 00:01:28.234583 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:01:28.234590 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:01:28.234597 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:01:28.234603 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:01:28.234613 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:01:28.234620 | orchestrator | 2025-06-02 00:01:28.234627 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-06-02 00:01:28.234633 | orchestrator | Monday 02 June 2025 00:00:14 +0000 (0:00:15.632) 0:02:14.036 *********** 2025-06-02 00:01:28.234640 | orchestrator | changed: [testbed-node-0] 2025-06-02 
00:01:28.234647 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:01:28.234653 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:01:28.234660 | orchestrator | 2025-06-02 00:01:28.234667 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-06-02 00:01:28.234673 | orchestrator | Monday 02 June 2025 00:00:25 +0000 (0:00:10.335) 0:02:24.372 *********** 2025-06-02 00:01:28.234680 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:01:28.234686 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:01:28.234693 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:01:28.234700 | orchestrator | 2025-06-02 00:01:28.234706 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-06-02 00:01:28.234713 | orchestrator | Monday 02 June 2025 00:00:34 +0000 (0:00:09.735) 0:02:34.108 *********** 2025-06-02 00:01:28.234720 | orchestrator | changed: [testbed-manager] 2025-06-02 00:01:28.234726 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:01:28.234733 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:01:28.234740 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:01:28.234759 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:01:28.234766 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:01:28.234773 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:01:28.234779 | orchestrator | 2025-06-02 00:01:28.234786 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-06-02 00:01:28.234793 | orchestrator | Monday 02 June 2025 00:00:51 +0000 (0:00:16.339) 0:02:50.448 *********** 2025-06-02 00:01:28.234799 | orchestrator | changed: [testbed-manager] 2025-06-02 00:01:28.234806 | orchestrator | 2025-06-02 00:01:28.234813 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-06-02 00:01:28.234819 | orchestrator | Monday 02 June 2025 00:01:00 +0000 (0:00:09.217) 0:02:59.665 *********** 2025-06-02 00:01:28.234826 | orchestrator | changed: [testbed-node-2] 2025-06-02 00:01:28.234833 | orchestrator | changed: [testbed-node-1] 2025-06-02 00:01:28.234840 | orchestrator | changed: [testbed-node-0] 2025-06-02 00:01:28.234846 | orchestrator | 2025-06-02 00:01:28.234853 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-06-02 00:01:28.234860 | orchestrator | Monday 02 June 2025 00:01:06 +0000 (0:00:05.566) 0:03:05.232 *********** 2025-06-02 00:01:28.234866 | orchestrator | changed: [testbed-manager] 2025-06-02 00:01:28.234873 | orchestrator | 2025-06-02 00:01:28.234880 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-06-02 00:01:28.234886 | orchestrator | Monday 02 June 2025 00:01:17 +0000 (0:00:11.015) 0:03:16.247 *********** 2025-06-02 00:01:28.234893 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:01:28.234899 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:01:28.234906 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:01:28.234913 | orchestrator | 2025-06-02 00:01:28.234919 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 00:01:28.234926 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-02 00:01:28.234933 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 
ignored=0
2025-06-02 00:01:28.234940 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-06-02 00:01:28.234947 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-06-02 00:01:28.234958 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-06-02 00:01:28.234964 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-06-02 00:01:28.234971 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-06-02 00:01:28.234978 | orchestrator |
2025-06-02 00:01:28.234984 | orchestrator |
2025-06-02 00:01:28.234991 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 00:01:28.234998 | orchestrator | Monday 02 June 2025 00:01:27 +0000 (0:00:10.652) 0:03:26.900 ***********
2025-06-02 00:01:28.235005 | orchestrator | ===============================================================================
2025-06-02 00:01:28.235011 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 24.28s
2025-06-02 00:01:28.235018 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 18.32s
2025-06-02 00:01:28.235027 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 17.69s
2025-06-02 00:01:28.235034 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 16.34s
2025-06-02 00:01:28.235045 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 15.63s
2025-06-02 00:01:28.235052 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 11.02s
2025-06-02 00:01:28.235059 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.65s
2025-06-02 00:01:28.235065 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.34s
2025-06-02 00:01:28.235072 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 9.74s
2025-06-02 00:01:28.235079 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 9.22s
2025-06-02 00:01:28.235085 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.85s
2025-06-02 00:01:28.235092 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.76s
2025-06-02 00:01:28.235098 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 5.57s
2025-06-02 00:01:28.235105 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 5.37s
2025-06-02 00:01:28.235112 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.76s
2025-06-02 00:01:28.235118 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.93s
2025-06-02 00:01:28.235125 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.02s
2025-06-02 00:01:28.235131 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 2.85s
2025-06-02 00:01:28.235138 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.80s
2025-06-02 00:01:28.235145 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.70s
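The RUNNING HANDLER entries above follow Ansible's notify/handler pattern: the config-copy tasks notify a restart handler, handlers run at the end of the play (or at one of the "Flush handlers" meta tasks seen earlier), and only on hosts where a notifying task reported "changed" — which is why, for example, prometheus-mysqld-exporter restarted only on nodes 0–2. A minimal sketch of the pattern (the real role uses kolla-ansible's own container module; community.docker is substituted here to keep the example self-contained, and file names are illustrative):

```yaml
---
# Sketch of the notify/handler pattern behind the restarts above.
- hosts: all
  gather_facts: false
  tasks:
    - name: Copying over prometheus config file
      ansible.builtin.template:
        src: prometheus.yml.j2
        dest: /etc/kolla/prometheus-server/prometheus.yml
        mode: "0600"
      notify:
        - Restart prometheus-server container

  handlers:
    # Runs once per host at the end of the play, and only if a
    # notifying task actually changed something on that host.
    - name: Restart prometheus-server container
      community.docker.docker_container:
        name: prometheus_server
        restart: true
```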
2025-06-02 00:01:28.235151 | orchestrator | 2025-06-02 00:01:28 | INFO  | Task b135a5f4-a278-420e-be42-77c1b8199e38 is in state STARTED
2025-06-02 00:01:28.235158 | orchestrator | 2025-06-02 00:01:28 | INFO  | Task 46ae15ef-6408-4095-b0c0-e4017efa90af is in state STARTED
2025-06-02 00:01:28.235165 | orchestrator | 2025-06-02 00:01:28 | INFO  | Task 0bdb5bff-f695-4264-8b60-bb1557704b5c is in state STARTED
2025-06-02 00:01:28.235172 | orchestrator | 2025-06-02 00:01:28 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:01:31.283246 | orchestrator | 2025-06-02 00:01:31 | INFO  | Task b135a5f4-a278-420e-be42-77c1b8199e38 is in state STARTED
2025-06-02 00:01:31.284104 | orchestrator | 2025-06-02 00:01:31 | INFO  | Task 559f1605-31b8-4240-9c6c-1924a5b25755 is in state STARTED
2025-06-02 00:01:31.286342 | orchestrator | 2025-06-02 00:01:31 | INFO  | Task 46ae15ef-6408-4095-b0c0-e4017efa90af is in state STARTED
2025-06-02 00:01:31.287696 | orchestrator | 2025-06-02 00:01:31 | INFO  | Task 36adf195-6349-413e-bc46-c2dd39ecd651 is in state STARTED
2025-06-02 00:01:31.289300 | orchestrator | 2025-06-02 00:01:31 | INFO  | Task 0bdb5bff-f695-4264-8b60-bb1557704b5c is in state STARTED
2025-06-02 00:01:31.289639 | orchestrator | 2025-06-02 00:01:31 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:01:49.634319 | orchestrator | 2025-06-02 00:01:49 | INFO  | Task b135a5f4-a278-420e-be42-77c1b8199e38 is in state STARTED
2025-06-02 00:01:49.635415 | orchestrator | 2025-06-02 00:01:49 | INFO  | Task 559f1605-31b8-4240-9c6c-1924a5b25755 is in state STARTED
2025-06-02 00:01:49.637180 | orchestrator | 2025-06-02 00:01:49 | INFO  | Task 46ae15ef-6408-4095-b0c0-e4017efa90af is in state STARTED
2025-06-02 00:01:49.638337 | orchestrator | 2025-06-02 00:01:49 | INFO  | Task 36adf195-6349-413e-bc46-c2dd39ecd651 is in state SUCCESS
2025-06-02 00:01:49.639867 | orchestrator | 2025-06-02 00:01:49 | INFO  | Task 0bdb5bff-f695-4264-8b60-bb1557704b5c is in state STARTED
2025-06-02 00:01:49.639927 | orchestrator | 2025-06-02 00:01:49 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:02:41.367250 | orchestrator | 2025-06-02 00:02:41 | INFO  | Task b135a5f4-a278-420e-be42-77c1b8199e38 is in state STARTED
2025-06-02 00:02:41.369529 | orchestrator | 2025-06-02 00:02:41 | INFO  | Task 6e8a59fd-a405-4799-8e89-5dbeba441afb is in state STARTED
2025-06-02 00:02:41.372091 | orchestrator | 2025-06-02 00:02:41 | INFO  | Task 559f1605-31b8-4240-9c6c-1924a5b25755 is in state STARTED
2025-06-02 00:02:41.372788 | orchestrator | 2025-06-02 00:02:41 | INFO  | Task 46ae15ef-6408-4095-b0c0-e4017efa90af is in state STARTED
2025-06-02 00:02:41.375869 | orchestrator | 2025-06-02 00:02:41 | INFO  | Task 0bdb5bff-f695-4264-8b60-bb1557704b5c is in state SUCCESS
2025-06-02 00:02:41.377733 | orchestrator |
2025-06-02 00:02:41.377811 | orchestrator | None
2025-06-02 00:02:41.377823 | orchestrator |
2025-06-02 00:02:41.377832 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 00:02:41.377842 | orchestrator |
2025-06-02 00:02:41.377850 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 00:02:41.377871
2025-06-02 00:02:41.377832 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 00:02:41.377842 | orchestrator | 
2025-06-02 00:02:41.377850 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 00:02:41.377871 | orchestrator | Sunday 01 June 2025 23:58:35 +0000 (0:00:00.699) 0:00:00.701 ***********
2025-06-02 00:02:41.377880 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:02:41.377889 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:02:41.377897 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:02:41.377905 | orchestrator | ok: [testbed-node-3]
2025-06-02 00:02:41.377913 | orchestrator | ok: [testbed-node-4]
2025-06-02 00:02:41.377921 | orchestrator | ok: [testbed-node-5]
2025-06-02 00:02:41.377929 | orchestrator | 
2025-06-02 00:02:41.377937 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 00:02:41.377945 | orchestrator | Sunday 01 June 2025 23:58:36 +0000 (0:00:01.472) 0:00:02.174 ***********
2025-06-02 00:02:41.377953 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2025-06-02 00:02:41.377962 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2025-06-02 00:02:41.377970 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2025-06-02 00:02:41.377977 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True)
2025-06-02 00:02:41.377985 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True)
2025-06-02 00:02:41.377993 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True)
2025-06-02 00:02:41.378001 | orchestrator | 
2025-06-02 00:02:41.378009 | orchestrator | PLAY [Apply role cinder] *******************************************************
2025-06-02 00:02:41.378106 | orchestrator | 
2025-06-02 00:02:41.378117 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-06-02 00:02:41.378125 | orchestrator | Sunday 01 June 2025 23:58:38 +0000 (0:00:01.401) 0:00:03.575 ***********
2025-06-02 00:02:41.378133 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 00:02:41.378143 | orchestrator | 
2025-06-02 00:02:41.378151 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2025-06-02 00:02:41.378214 | orchestrator | Sunday 01 June 2025 23:58:40 +0000 (0:00:02.336) 0:00:05.911 ***********
2025-06-02 00:02:41.378237 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2025-06-02 00:02:41.378278 | orchestrator | 
2025-06-02 00:02:41.378287 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2025-06-02 00:02:41.378295 | orchestrator | Sunday 01 June 2025 23:58:43 +0000 (0:00:03.065) 0:00:08.977 ***********
2025-06-02 00:02:41.378306 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2025-06-02 00:02:41.378316 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2025-06-02 00:02:41.378325 | orchestrator | 
2025-06-02 00:02:41.378335 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2025-06-02 00:02:41.378345 | orchestrator | Sunday 01 June 2025 23:58:50 +0000 (0:00:06.368) 0:00:15.345 ***********
2025-06-02 00:02:41.378356 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-02 00:02:41.378365 | orchestrator | 
2025-06-02 00:02:41.378375 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2025-06-02 00:02:41.378383 | orchestrator | Sunday 01 June 2025 23:58:53 +0000 (0:00:03.070) 0:00:18.416 ***********
2025-06-02 00:02:41.378391 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-02 00:02:41.378399 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2025-06-02 00:02:41.378407 | orchestrator | 
2025-06-02 00:02:41.378415 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2025-06-02 00:02:41.378423 | orchestrator | Sunday 01 June 2025 23:58:56 +0000 (0:00:03.798) 0:00:22.215 ***********
2025-06-02 00:02:41.378431 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-02 00:02:41.378439 | orchestrator | 
2025-06-02 00:02:41.378446 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] **********************
2025-06-02 00:02:41.378454 | orchestrator | Sunday 01 June 2025 23:59:00 +0000 (0:00:03.182) 0:00:25.397 ***********
2025-06-02 00:02:41.378462 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin)
2025-06-02 00:02:41.378470 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service)
2025-06-02 00:02:41.378478 | orchestrator | 
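The service-ks-register tasks above are the usual Keystone bootstrap for an OpenStack service: create the service entity, its internal and public endpoints, the service project, the cinder user, and the role grants. A rough openstacksdk equivalent, with all names and URLs taken from the log (a sketch only: password handling, idempotence checks, and region arguments are omitted, and the cloud name is an assumption):

    import openstack

    # "testbed" is an assumed clouds.yaml entry, not shown in this log.
    conn = openstack.connect(cloud="testbed")

    # Service and endpoints, as logged by "Creating services"/"Creating endpoints".
    service = conn.identity.create_service(name="cinderv3", type="volumev3")
    for interface, url in (
        ("internal", "https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s"),
        ("public", "https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s"),
    ):
        conn.identity.create_endpoint(service_id=service.id, interface=interface, url=url)

    # Project, user, and the admin/service role grants from "Granting user roles".
    project = conn.identity.find_project("service")
    user = conn.identity.create_user(name="cinder", default_project_id=project.id)
    for role_name in ("admin", "service"):
        role = conn.identity.find_role(role_name)
        conn.identity.assign_project_role_to_user(project, user, role)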
2025-06-02 00:02:41.378486 | orchestrator | TASK [cinder : Ensuring config directories exist] ******************************
2025-06-02 00:02:41.378494 | orchestrator | Sunday 01 June 2025 23:59:07 +0000 (0:00:07.549) 0:00:32.946 ***********
2025-06-02 00:02:41.378538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 00:02:41.378552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 00:02:41.378567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 00:02:41.378576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 00:02:41.379138 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-02 00:02:41.379155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 00:02:41.379215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 00:02:41.379228 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name':
'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 00:02:41.379265 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 00:02:41.379276 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 00:02:41.379284 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 00:02:41.379298 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 00:02:41.379306 | orchestrator | 
2025-06-02 00:02:41.379338 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-06-02 00:02:41.379348 | orchestrator | Sunday 01 June 2025 23:59:10 +0000 (0:00:03.102) 0:00:36.049 ***********
2025-06-02 00:02:41.379356 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:02:41.379364 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:02:41.379372 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:02:41.379439 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:02:41.379468 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:02:41.379476 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:02:41.379484 | orchestrator | 
2025-06-02 00:02:41.379492 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-06-02 00:02:41.379500 | orchestrator | Sunday 01 June 2025 23:59:11 +0000 (0:00:00.662) 0:00:36.711 ***********
2025-06-02 00:02:41.379508 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:02:41.379516 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:02:41.379524 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:02:41.379532 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 00:02:41.379540 | orchestrator | 
2025-06-02 00:02:41.379548 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2025-06-02 00:02:41.379556 | orchestrator | Sunday 01 June 2025 23:59:12 +0000 (0:00:01.028) 0:00:37.740 ***********
2025-06-02 00:02:41.379564 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume)
2025-06-02 00:02:41.379572 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume)
2025-06-02 00:02:41.379580 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume)
2025-06-02 00:02:41.379588 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup)
2025-06-02 00:02:41.379596 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup)
2025-06-02 00:02:41.379604 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup)
2025-06-02 00:02:41.379612 | orchestrator | 
2025-06-02 00:02:41.379620 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2025-06-02 00:02:41.379628 | orchestrator | Sunday 01 June 2025 23:59:15 +0000 (0:00:03.032) 0:00:40.772 ***********
2025-06-02 00:02:41.379637 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'},
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-02 00:02:41.379646 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-02 00:02:41.379662 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-02 00:02:41.379701 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-02 00:02:41.379712 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  
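This task iterates over the product of cinder services and configured Ceph backends (here a single backend, rbd-1, in cluster ceph): the cinder-api and cinder-scheduler items are skipped because only cinder-volume and cinder-backup talk to Ceph directly, which is why the remaining items below come back as changed. A sketch of that selection logic, inferred from the skip pattern in this log rather than lifted from the kolla-ansible source (the destination path is an assumption):

    from itertools import product

    services = ["cinder-api", "cinder-scheduler", "cinder-volume", "cinder-backup"]
    backends = [{"name": "rbd-1", "cluster": "ceph", "enabled": True}]
    needs_ceph = {"cinder-volume", "cinder-backup"}  # services with direct RBD access

    for service, backend in product(services, backends):
        if service not in needs_ceph or not backend["enabled"]:
            continue  # shows up as "skipping" in the task output
        # The real task templates the cluster config into the service's
        # config directory; the exact target path is assumed here.
        print(f"copy {backend['cluster']}.conf -> /etc/kolla/{service}/{backend['cluster']}.conf")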
2025-06-02 00:02:41.379720 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-02 00:02:41.379729 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-02 00:02:41.379743 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-02 00:02:41.379841 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-02 00:02:41.379852 | orchestrator | changed: [testbed-node-5] 
=> (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-02 00:02:41.379863 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-02 00:02:41.379871 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-02 00:02:41.379880 | orchestrator | 2025-06-02 00:02:41.379888 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-06-02 00:02:41.379896 | orchestrator | Sunday 01 June 2025 23:59:20 +0000 (0:00:05.113) 0:00:45.886 *********** 2025-06-02 00:02:41.379904 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-06-02 00:02:41.379932 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-06-02 00:02:41.379941 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-06-02 00:02:41.379949 | orchestrator | 2025-06-02 00:02:41.379957 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-06-02 00:02:41.379965 | orchestrator | Sunday 01 June 2025 23:59:22 +0000 (0:00:01.683) 0:00:47.570 *********** 2025-06-02 00:02:41.379973 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-06-02 00:02:41.379981 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-06-02 00:02:41.379989 | orchestrator | 
changed: [testbed-node-5] => (item=ceph.client.cinder.keyring)
2025-06-02 00:02:41.380002 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring)
2025-06-02 00:02:41.380010 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring)
2025-06-02 00:02:41.380042 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring)
2025-06-02 00:02:41.380052 | orchestrator | 
2025-06-02 00:02:41.380060 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2025-06-02 00:02:41.380068 | orchestrator | Sunday 01 June 2025 23:59:26 +0000 (0:00:03.769) 0:00:51.342 ***********
2025-06-02 00:02:41.380076 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume)
2025-06-02 00:02:41.380084 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume)
2025-06-02 00:02:41.380092 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup)
2025-06-02 00:02:41.380100 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume)
2025-06-02 00:02:41.380108 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup)
2025-06-02 00:02:41.380116 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup)
2025-06-02 00:02:41.380124 | orchestrator | 
2025-06-02 00:02:41.380132 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2025-06-02 00:02:41.380140 | orchestrator | Sunday 01 June 2025 23:59:27 +0000 (0:00:01.439) 0:00:52.782 ***********
2025-06-02 00:02:41.380148 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:02:41.380156 | orchestrator | 
2025-06-02 00:02:41.380164 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2025-06-02 00:02:41.380172 | orchestrator | Sunday 01 June 2025 23:59:27 +0000 (0:00:00.130) 0:00:52.912 ***********
2025-06-02 00:02:41.380180 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:02:41.380188 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:02:41.380195 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:02:41.380203 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:02:41.380211 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:02:41.380219 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:02:41.380227 | orchestrator | 
2025-06-02 00:02:41.380235 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-06-02 00:02:41.380243 | orchestrator | Sunday 01 June 2025 23:59:29 +0000 (0:00:01.739) 0:00:54.651 ***********
2025-06-02 00:02:41.380264 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 00:02:41.380274 | orchestrator | 
2025-06-02 00:02:41.380282 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2025-06-02 00:02:41.380290 | orchestrator | Sunday 01 June 2025 23:59:31 +0000 (0:00:01.890) 0:00:56.542 ***********
2025-06-02 00:02:41.380299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 00:02:41.380324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 00:02:41.380361 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 00:02:41.380371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 00:02:41.380380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 00:02:41.380388 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 00:02:41.380402 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 00:02:41.380410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 00:02:41.380446 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 00:02:41.380457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 00:02:41.380465 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 00:02:41.380479 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 00:02:41.380487 | orchestrator | 2025-06-02 00:02:41.380495 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-06-02 00:02:41.380503 | orchestrator | Sunday 01 June 2025 23:59:34 +0000 (0:00:03.363) 0:00:59.905 *********** 2025-06-02 00:02:41.380512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 00:02:41.380538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 00:02:41.380548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 00:02:41.380557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 00:02:41.380565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 00:02:41.380579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 00:02:41.380587 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:02:41.380595 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:02:41.380603 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:02:41.380616 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 00:02:41.380630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 00:02:41.380639 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:02:41.380647 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 00:02:41.380664 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 00:02:41.380672 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:02:41.380681 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 00:02:41.380689 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 00:02:41.380697 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:02:41.380705 | orchestrator | 2025-06-02 00:02:41.380713 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-06-02 00:02:41.380725 | orchestrator | Sunday 01 June 2025 23:59:37 +0000 (0:00:02.599) 0:01:02.505 *********** 2025-06-02 00:02:41.380738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 00:02:41.380747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 00:02:41.380760 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:02:41.380812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 00:02:41.380821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 00:02:41.380829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 00:02:41.380848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 00:02:41.380857 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:02:41.380865 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:02:41.380873 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 00:02:41.380894 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 00:02:41.380909 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:02:41.380923 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 00:02:41.380941 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 00:02:41.380962 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:02:41.380990 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 00:02:41.381005 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 00:02:41.381046 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:02:41.381060 | orchestrator | 2025-06-02 00:02:41.381074 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-06-02 00:02:41.381087 | orchestrator | Sunday 01 June 2025 23:59:40 +0000 (0:00:02.874) 0:01:05.380 *********** 2025-06-02 00:02:41.381098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 00:02:41.381112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 00:02:41.381126 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 00:02:41.381154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 00:02:41.381170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 00:02:41.381178 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 00:02:41.381187 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 00:02:41.381195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 00:02:41.381207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 00:02:41.381221 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 00:02:41.381235 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 00:02:41.381244 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 00:02:41.381252 | orchestrator | 2025-06-02 00:02:41.381260 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-06-02 00:02:41.381268 | orchestrator | Sunday 01 June 2025 23:59:43 +0000 (0:00:03.201) 0:01:08.582 *********** 2025-06-02 00:02:41.381276 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-02 00:02:41.381285 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:02:41.381293 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-02 00:02:41.381301 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-02 00:02:41.381309 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:02:41.381317 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-02 00:02:41.381325 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:02:41.381335 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-02 00:02:41.381345 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-02 00:02:41.381354 | orchestrator | 2025-06-02 00:02:41.381364 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-06-02 00:02:41.381373 | orchestrator | Sunday 01 June 2025 23:59:46 +0000 (0:00:02.862) 0:01:11.444 *********** 2025-06-02 00:02:41.381383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 00:02:41.381411 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 00:02:41.381422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 00:02:41.381433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 00:02:41.381443 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 00:02:41.381463 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 00:02:41.381480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 00:02:41.381491 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 00:02:41.381501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 00:02:41.381511 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 00:02:41.381521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 00:02:41.381531 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 
'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 00:02:41.381554 | orchestrator | 2025-06-02 00:02:41.381568 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-06-02 00:02:41.381579 | orchestrator | Sunday 01 June 2025 23:59:56 +0000 (0:00:10.612) 0:01:22.057 *********** 2025-06-02 00:02:41.381593 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:02:41.381603 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:02:41.381613 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:02:41.381623 | orchestrator | changed: [testbed-node-5] 2025-06-02 00:02:41.381632 | orchestrator | changed: [testbed-node-3] 2025-06-02 00:02:41.381642 | orchestrator | changed: [testbed-node-4] 2025-06-02 00:02:41.381652 | orchestrator | 2025-06-02 00:02:41.381661 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-06-02 00:02:41.381671 | orchestrator | Sunday 01 June 2025 23:59:58 +0000 (0:00:01.782) 0:01:23.840 *********** 2025-06-02 00:02:41.381681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 00:02:41.381692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 00:02:41.381702 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:02:41.381712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 00:02:41.381722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 00:02:41.381738 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:02:41.381758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 00:02:41.381799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 00:02:41.381810 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:02:41.381820 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 00:02:41.381831 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 00:02:41.381840 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:02:41.381851 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 00:02:41.381871 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 00:02:41.381887 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2025-06-02 00:02:41.381898 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:02:41.381908 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 00:02:41.381918 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:02:41.381927 | orchestrator | 2025-06-02 00:02:41.381937 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-06-02 00:02:41.381947 | orchestrator | Sunday 01 June 2025 23:59:59 +0000 (0:00:00.987) 0:01:24.827 *********** 2025-06-02 00:02:41.381956 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:02:41.381966 | orchestrator | skipping: [testbed-node-1] 2025-06-02 00:02:41.381976 | orchestrator | skipping: [testbed-node-2] 2025-06-02 00:02:41.381985 | orchestrator | skipping: [testbed-node-3] 2025-06-02 00:02:41.381995 | orchestrator | skipping: [testbed-node-4] 2025-06-02 00:02:41.382004 | orchestrator | skipping: [testbed-node-5] 2025-06-02 00:02:41.382043 | orchestrator | 2025-06-02 00:02:41.382055 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-06-02 00:02:41.382065 | orchestrator | Monday 02 June 2025 00:00:01 +0000 (0:00:02.011) 0:01:26.839 *********** 2025-06-02 00:02:41.382076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 00:02:41.382101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 
'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 00:02:41.382159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 00:02:41.382179 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 00:02:41.382198 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 00:02:41.382227 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 00:02:41.382243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 00:02:41.382269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 00:02:41.382281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 00:02:41.382291 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 00:02:41.382302 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 00:02:41.382320 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 00:02:41.382330 | orchestrator |
2025-06-02 00:02:41.382341 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-06-02 00:02:41.382350 | orchestrator | Monday 02 June 2025 00:00:05 +0000 (0:00:04.248) 0:01:31.087 ***********
2025-06-02 00:02:41.382360 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:02:41.382371 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:02:41.382380 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:02:41.382390 | orchestrator | skipping: [testbed-node-3]
2025-06-02 00:02:41.382400 | orchestrator | skipping: [testbed-node-4]
2025-06-02 00:02:41.382409 | orchestrator | skipping: [testbed-node-5]
2025-06-02 00:02:41.382420 | orchestrator |
2025-06-02 00:02:41.382430 | orchestrator | TASK [cinder : Creating Cinder database] ***************************************
2025-06-02 00:02:41.382440 | orchestrator | Monday 02 June 2025 00:00:06 +0000 (0:00:00.964) 0:01:32.052 ***********
2025-06-02 00:02:41.382449 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:02:41.382459 | orchestrator |
2025-06-02 00:02:41.382469 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] **********
2025-06-02 00:02:41.382478 | orchestrator | Monday 02 June 2025 00:00:08 +0000 (0:00:02.019) 0:01:34.071 ***********
2025-06-02 00:02:41.382488 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:02:41.382498 | orchestrator |
2025-06-02 00:02:41.382507 | orchestrator | TASK [cinder : Running Cinder bootstrap container] *****************************
2025-06-02 00:02:41.382517 | orchestrator | Monday 02 June 2025 00:00:10 +0000 (0:00:02.183) 0:01:36.255 ***********
2025-06-02 00:02:41.382527 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:02:41.382537 | orchestrator |
2025-06-02 00:02:41.382551 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-06-02 00:02:41.382561 | orchestrator | Monday 02 June 2025 00:00:29 +0000 (0:00:18.546) 0:01:54.801 ***********
2025-06-02 00:02:41.382571 | orchestrator |
2025-06-02 00:02:41.382586 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-06-02 00:02:41.382596 | orchestrator | Monday 02 June 2025 00:00:29 +0000 (0:00:00.075) 0:01:54.877 ***********
2025-06-02 00:02:41.382606 | orchestrator |
2025-06-02 00:02:41.382616 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-06-02 00:02:41.382626 | orchestrator | Monday 02 June 2025 00:00:29 +0000 (0:00:00.066) 0:01:54.943 ***********
2025-06-02 00:02:41.382636 | orchestrator |
2025-06-02 00:02:41.382646 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-06-02 00:02:41.382655 | orchestrator | Monday 02 June 2025 00:00:29 +0000 (0:00:00.064) 0:01:55.007 ***********
2025-06-02 00:02:41.382665 | orchestrator |
2025-06-02 00:02:41.382674 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-06-02 00:02:41.382684 | orchestrator | Monday 02 June 2025 00:00:29 +0000 (0:00:00.063) 0:01:55.071 ***********
2025-06-02 00:02:41.382694 | orchestrator |
2025-06-02 00:02:41.382703 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-06-02 00:02:41.382713 | orchestrator | Monday 02 June 2025 00:00:29 +0000 (0:00:00.063) 0:01:55.134 ***********
2025-06-02 00:02:41.382729 | orchestrator |
2025-06-02 00:02:41.382739 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************
2025-06-02 00:02:41.382749 | orchestrator | Monday 02 June 2025 00:00:29 +0000 (0:00:00.060) 0:01:55.195 ***********
2025-06-02 00:02:41.382758 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:02:41.382795 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:02:41.382805 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:02:41.382815 | orchestrator |
2025-06-02 00:02:41.382825 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2025-06-02 00:02:41.382834 | orchestrator | Monday 02 June 2025 00:00:51 +0000 (0:00:21.508) 0:02:16.704 ***********
2025-06-02 00:02:41.382844 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:02:41.382854 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:02:41.382864 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:02:41.382873 | orchestrator |
2025-06-02 00:02:41.382883 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2025-06-02 00:02:41.382892 | orchestrator | Monday 02 June 2025 00:01:03 +0000 (0:00:11.606) 0:02:28.310 ***********
2025-06-02 00:02:41.382902 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:02:41.382912 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:02:41.382921 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:02:41.382931 | orchestrator |
2025-06-02 00:02:41.382946 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2025-06-02 00:02:41.382956 | orchestrator | Monday 02 June 2025 00:02:23 +0000 (0:01:20.420) 0:03:48.731 ***********
2025-06-02 00:02:41.382966 | orchestrator | changed: [testbed-node-4]
2025-06-02 00:02:41.382975 | orchestrator | changed: [testbed-node-3]
2025-06-02 00:02:41.382985 | orchestrator | changed: [testbed-node-5]
2025-06-02 00:02:41.382995 | orchestrator |
2025-06-02 00:02:41.383004 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2025-06-02 00:02:41.383014 | orchestrator | Monday 02 June 2025 00:02:37 +0000 (0:00:13.756) 0:04:02.488 ***********
2025-06-02 00:02:41.383024 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:02:41.383034 | orchestrator |
2025-06-02 00:02:41.383043 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 00:02:41.383054 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-06-02 00:02:41.383065 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-06-02 00:02:41.383075 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-06-02 00:02:41.383085 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-06-02 00:02:41.383095 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-06-02 00:02:41.383104 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-06-02 00:02:41.383114 | orchestrator |
2025-06-02 00:02:41.383124 | orchestrator |
2025-06-02 00:02:41.383134 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 00:02:41.383144 | orchestrator | Monday 02 June 2025 00:02:38 +0000 (0:00:01.721) 0:04:04.210 ***********
2025-06-02 00:02:41.383154 | orchestrator | ===============================================================================
2025-06-02 00:02:41.383163 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 80.42s
2025-06-02 00:02:41.383173 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 21.51s
2025-06-02 00:02:41.383190 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 18.55s
2025-06-02 00:02:41.383199 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 13.76s
2025-06-02 00:02:41.383212 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 11.61s
2025-06-02 00:02:41.383228 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.61s
2025-06-02 00:02:41.383251 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.55s
2025-06-02 00:02:41.383268 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.37s
2025-06-02 00:02:41.383319 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 5.11s
2025-06-02 00:02:41.383339 | orchestrator | cinder : Check cinder containers ---------------------------------------- 4.25s
2025-06-02 00:02:41.383358 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.80s
2025-06-02 00:02:41.383375 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.77s
2025-06-02 00:02:41.383392 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.37s
2025-06-02 00:02:41.383402 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.20s
2025-06-02 00:02:41.383412 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.18s
2025-06-02 00:02:41.383422 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.10s
2025-06-02 00:02:41.383432 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.07s
2025-06-02 00:02:41.383441 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.07s
2025-06-02 00:02:41.383452 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 3.03s
2025-06-02 00:02:41.383461 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS key ------ 2.87s
2025-06-02 00:02:41.383472 | orchestrator | 2025-06-02 00:02:41 | INFO  | Wait 1 second(s) until the next check
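Every container definition dumped in the task output above carries a kolla-style healthcheck block: 'interval', 'retries', 'start_period', 'test' and 'timeout', where the test command is one of the helper scripts shipped inside the kolla images (healthcheck_curl for the HTTP-speaking cinder-api, healthcheck_port for the RabbitMQ connection of cinder-scheduler, cinder-volume and cinder-backup). As a sketch of the semantics only (kolla-ansible applies these through its kolla_docker module, not via the docker CLI, so the flag mapping below is an illustration, not the actual code path):

# Sketch: map a kolla-style healthcheck block onto `docker run` health flags.
# Illustrative only -- an assumption about intent, not what kolla_docker does.

def healthcheck_flags(hc: dict) -> list[str]:
    """Build docker CLI health flags from a kolla healthcheck dict (seconds as strings)."""
    shell_cmd = hc["test"][1]  # 'test' is ['CMD-SHELL', '<command>']
    return [
        f"--health-cmd={shell_cmd}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

# Values copied verbatim from the cinder-api item for testbed-node-0:
print(healthcheck_flags({
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8776"],
    "timeout": "30",
}))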
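The INFO lines that follow are the osism client polling the state of the four deployment tasks it kicked off until each one leaves the STARTED state (STARTED and SUCCESS are standard Celery task states); it sleeps the advertised one second between rounds, though the timestamps show each round taking roughly three seconds once the state queries are included. A minimal sketch of such a wait loop, assuming a hypothetical get_task_state(task_id) helper in place of the real manager API call:

import time

TASK_IDS = [
    "b135a5f4-a278-420e-be42-77c1b8199e38",
    "6e8a59fd-a405-4799-8e89-5dbeba441afb",
    "559f1605-31b8-4240-9c6c-1924a5b25755",
    "46ae15ef-6408-4095-b0c0-e4017efa90af",
]

def get_task_state(task_id: str) -> str:
    # Hypothetical stand-in for the real task-state lookup (the actual
    # client asks the OSISM manager); hardwired so the sketch runs alone.
    return "SUCCESS"

def wait_for_tasks(task_ids: list[str], delay: int = 1) -> None:
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):  # sorted() copies, safe to discard below
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {delay} second(s) until the next check")
            time.sleep(delay)

wait_for_tasks(TASK_IDS)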
2025-06-02 00:02:44.430891 | orchestrator | 2025-06-02 00:02:44 | INFO  | Task b135a5f4-a278-420e-be42-77c1b8199e38 is in state STARTED
2025-06-02 00:02:44.431618 | orchestrator | 2025-06-02 00:02:44 | INFO  | Task 6e8a59fd-a405-4799-8e89-5dbeba441afb is in state STARTED
2025-06-02 00:02:44.432755 | orchestrator | 2025-06-02 00:02:44 | INFO  | Task 559f1605-31b8-4240-9c6c-1924a5b25755 is in state STARTED
2025-06-02 00:02:44.433888 | orchestrator | 2025-06-02 00:02:44 | INFO  | Task 46ae15ef-6408-4095-b0c0-e4017efa90af is in state STARTED
2025-06-02 00:02:44.433917 | orchestrator | 2025-06-02 00:02:44 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:03:33.065234 | orchestrator | 2025-06-02 00:03:33 | INFO  | Task b135a5f4-a278-420e-be42-77c1b8199e38 is in state STARTED
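The block above is a poll-and-wait loop: the orchestrator checks the state of four task IDs (Celery-style STARTED/SUCCESS states) every few seconds until each leaves STARTED. A minimal sketch of that pattern, where `get_task_state` is a hypothetical stand-in for the real task-backend query, not the OSISM implementation:

```python
import time

def get_task_state(task_id: str) -> str:
    """Hypothetical stand-in for the real task-backend query."""
    raise NotImplementedError

def wait_for_tasks(task_ids, interval: float = 1.0) -> None:
    # Poll every pending task each round, log its state, and drop it once
    # it reaches a terminal state; then wait before the next check.
    pending = set(task_ids)
    while pending:
        for task_id in list(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:.0f} second(s) until the next check")
            time.sleep(interval)
```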
2025-06-02 00:03:33.065576 | orchestrator | 2025-06-02 00:03:33 | INFO  | Task 6e8a59fd-a405-4799-8e89-5dbeba441afb is in state STARTED
2025-06-02 00:03:33.066381 | orchestrator | 2025-06-02 00:03:33 | INFO  | Task 559f1605-31b8-4240-9c6c-1924a5b25755 is in state STARTED
2025-06-02 00:03:33.067655 | orchestrator | 2025-06-02 00:03:33 | INFO  | Task 46ae15ef-6408-4095-b0c0-e4017efa90af is in state STARTED
2025-06-02 00:03:33.067746 | orchestrator | 2025-06-02 00:03:33 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:03:36.092719 | orchestrator | 2025-06-02 00:03:36 | INFO  | Task b135a5f4-a278-420e-be42-77c1b8199e38 is in state STARTED
2025-06-02 00:03:36.095615 | orchestrator | 2025-06-02 00:03:36 | INFO  | Task 6e8a59fd-a405-4799-8e89-5dbeba441afb is in state STARTED
2025-06-02 00:03:36.099917 | orchestrator | 2025-06-02 00:03:36 | INFO  | Task 559f1605-31b8-4240-9c6c-1924a5b25755 is in state SUCCESS
2025-06-02 00:03:36.101018 | orchestrator |
2025-06-02 00:03:36.101058 | orchestrator |
2025-06-02 00:03:36.101071 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 00:03:36.101084 | orchestrator |
2025-06-02 00:03:36.101095 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 00:03:36.101106 | orchestrator | Monday 02 June 2025 00:01:32 +0000 (0:00:00.272) 0:00:00.272 ***********
2025-06-02 00:03:36.101118 | orchestrator | ok: [testbed-node-0]
2025-06-02 00:03:36.101130 | orchestrator | ok: [testbed-node-1]
2025-06-02 00:03:36.101566 | orchestrator | ok: [testbed-node-2]
2025-06-02 00:03:36.101594 | orchestrator |
2025-06-02 00:03:36.101606 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 00:03:36.101618 | orchestrator | Monday 02 June 2025 00:01:32 +0000 (0:00:00.304) 0:00:00.577 ***********
2025-06-02 00:03:36.101629 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2025-06-02 00:03:36.101642 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2025-06-02 00:03:36.101653 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2025-06-02 00:03:36.101664 | orchestrator |
2025-06-02 00:03:36.101675 | orchestrator | PLAY [Apply role barbican] *****************************************************
2025-06-02 00:03:36.101687 | orchestrator |
2025-06-02 00:03:36.101698 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-06-02 00:03:36.101709 | orchestrator | Monday 02 June 2025 00:01:32 +0000 (0:00:00.418) 0:00:00.995 ***********
2025-06-02 00:03:36.101737 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:03:36.101806 | orchestrator |
2025-06-02 00:03:36.101819 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2025-06-02 00:03:36.101830 | orchestrator | Monday 02 June 2025 00:01:33 +0000 (0:00:00.570) 0:00:01.566 ***********
2025-06-02 00:03:36.101842 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2025-06-02 00:03:36.101853 | orchestrator |
2025-06-02 00:03:36.101863 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2025-06-02 00:03:36.101898 | orchestrator | Monday 02 June 2025 00:01:36 +0000 (0:00:03.328) 0:00:04.895 ***********
2025-06-02 00:03:36.101910 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
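The service-ks-register tasks here are ordinary Keystone admin work: register the barbican service of type key-manager, then its internal and public endpoints. A rough openstacksdk equivalent, illustrative only (kolla-ansible drives this through its own modules, and the cloud profile and region name below are assumptions):

```python
import openstack

# Illustrative only: connection parameters and region are assumptions,
# not values taken from this deployment.
conn = openstack.connect(cloud="testbed")

service = conn.identity.create_service(
    name="barbican",
    type="key-manager",
)
for interface, url in {
    "internal": "https://api-int.testbed.osism.xyz:9311",
    "public": "https://api.testbed.osism.xyz:9311",
}.items():
    conn.identity.create_endpoint(
        service_id=service.id,
        interface=interface,
        url=url,
        region_id="RegionOne",  # assumed region
    )
```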
2025-06-02 00:03:36.101921 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2025-06-02 00:03:36.101932 | orchestrator |
2025-06-02 00:03:36.101943 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2025-06-02 00:03:36.101954 | orchestrator | Monday 02 June 2025 00:01:43 +0000 (0:00:06.405) 0:00:11.300 ***********
2025-06-02 00:03:36.101964 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-02 00:03:36.101976 | orchestrator |
2025-06-02 00:03:36.101986 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2025-06-02 00:03:36.101997 | orchestrator | Monday 02 June 2025 00:01:46 +0000 (0:00:03.130) 0:00:14.431 ***********
2025-06-02 00:03:36.102008 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-02 00:03:36.102070 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2025-06-02 00:03:36.102085 | orchestrator |
2025-06-02 00:03:36.102095 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2025-06-02 00:03:36.102106 | orchestrator | Monday 02 June 2025 00:01:50 +0000 (0:00:03.790) 0:00:18.221 ***********
2025-06-02 00:03:36.102117 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-02 00:03:36.102129 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2025-06-02 00:03:36.102139 | orchestrator | changed: [testbed-node-0] => (item=creator)
2025-06-02 00:03:36.102150 | orchestrator | changed: [testbed-node-0] => (item=observer)
2025-06-02 00:03:36.102161 | orchestrator | changed: [testbed-node-0] => (item=audit)
2025-06-02 00:03:36.102175 | orchestrator |
2025-06-02 00:03:36.102187 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2025-06-02 00:03:36.102200 | orchestrator | Monday 02 June 2025 00:02:05 +0000 (0:00:15.307) 0:00:33.529 ***********
2025-06-02 00:03:36.102213 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2025-06-02 00:03:36.102226 | orchestrator |
2025-06-02 00:03:36.102239 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2025-06-02 00:03:36.102252 | orchestrator | Monday 02 June 2025 00:02:09 +0000 (0:00:04.088) 0:00:37.618 ***********
2025-06-02 00:03:36.102269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 00:03:36.102300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 00:03:36.102333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.102349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.102363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 00:03:36.102378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.102402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.102424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.102443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.102456 | orchestrator |
2025-06-02 00:03:36.102469 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2025-06-02 00:03:36.102482 | orchestrator | Monday 02 June 2025 00:02:11 +0000 (0:00:02.370) 0:00:39.988 ***********
2025-06-02 00:03:36.102496 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2025-06-02 00:03:36.102509 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2025-06-02 00:03:36.102522 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2025-06-02 00:03:36.102535 | orchestrator |
2025-06-02 00:03:36.102546 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2025-06-02 00:03:36.102557 | orchestrator | Monday 02 June 2025 00:02:13 +0000 (0:00:01.710) 0:00:41.698 ***********
2025-06-02 00:03:36.102568 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:03:36.102579 | orchestrator |
2025-06-02 00:03:36.102590 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2025-06-02 00:03:36.102600 | orchestrator | Monday 02 June 2025 00:02:13 +0000 (0:00:00.134) 0:00:41.832 ***********
2025-06-02 00:03:36.102611 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:03:36.102622 | orchestrator | skipping: [testbed-node-1]
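Every (item=...) dump in these loops is one entry of the role's service map; the same three entries (barbican-api, barbican-keystone-listener, barbican-worker) recur in each task below. Stripped of the log framing, the structure being iterated looks like this (values copied from the items above, for testbed-node-0; the variable name is illustrative, and the two empty volume slots from the log are omitted):

```python
# Service map the barbican role loops over, reconstructed from the log items.
barbican_services = {
    "barbican-api": {
        "container_name": "barbican_api",
        "group": "barbican-api",
        "enabled": True,
        "image": "registry.osism.tech/kolla/barbican-api:2024.2",
        "volumes": [
            "/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "barbican:/var/lib/barbican/",
            "kolla_logs:/var/log/kolla/",
        ],
        # healthcheck_curl probes the API port on the node's internal address.
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9311"],
            "timeout": "30",
        },
        # Two HAProxy frontends: internal (api-int) and external
        # (api.testbed.osism.xyz), both plain HTTP to the backend.
        "haproxy": {
            "barbican_api": {
                "enabled": "yes", "mode": "http", "external": False,
                "port": "9311", "listen_port": "9311", "tls_backend": "no",
            },
            "barbican_api_external": {
                "enabled": "yes", "mode": "http", "external": True,
                "external_fqdn": "api.testbed.osism.xyz",
                "port": "9311", "listen_port": "9311", "tls_backend": "no",
            },
        },
    },
    # barbican-keystone-listener and barbican-worker follow the same shape,
    # with healthcheck_port checks against the RabbitMQ port 5672 instead.
}
```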
2025-06-02 00:03:36.102633 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:03:36.102644 | orchestrator |
2025-06-02 00:03:36.102655 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-06-02 00:03:36.102666 | orchestrator | Monday 02 June 2025 00:02:14 +0000 (0:00:00.932) 0:00:42.765 ***********
2025-06-02 00:03:36.102677 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 00:03:36.102688 | orchestrator |
2025-06-02 00:03:36.102699 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2025-06-02 00:03:36.102710 | orchestrator | Monday 02 June 2025 00:02:16 +0000 (0:00:01.528) 0:00:44.293 ***********
2025-06-02 00:03:36.102721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 00:03:36.102749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 00:03:36.102795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 00:03:36.102808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.102820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.102832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.102852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.102871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
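The healthcheck 'test' commands in these items (healthcheck_curl, healthcheck_port) are small scripts shipped inside the kolla images; Docker runs them on the given interval and marks the container unhealthy once the retries are exhausted. Roughly, in Python (a sketch of the behavior, not the shipped scripts):

```python
import socket
import sys
import urllib.request

# Rough equivalents of the healthcheck commands named above (illustrative).
def healthcheck_curl(url: str) -> int:
    """Exit 0 if the HTTP endpoint answers with a non-error status."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return 0 if resp.status < 400 else 1
    except OSError:
        return 1

def healthcheck_port(process: str, port: int) -> int:
    """The shipped script checks that the named process has a connection to
    the port; this sketch only checks that the port is reachable locally."""
    try:
        with socket.create_connection(("127.0.0.1", port), timeout=5):
            return 0
    except OSError:
        return 1

if __name__ == "__main__":
    sys.exit(healthcheck_curl("http://192.168.16.10:9311"))
```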
2025-06-02 00:03:36.102883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.102895 | orchestrator |
2025-06-02 00:03:36.102910 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] ***
2025-06-02 00:03:36.102922 | orchestrator | Monday 02 June 2025 00:02:19 +0000 (0:00:03.445) 0:00:47.739 ***********
2025-06-02 00:03:36.102934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 00:03:36.102946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.102958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.102977 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:03:36.102996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 00:03:36.103009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.103026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.103038 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:03:36.103050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 00:03:36.103061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.103081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.103093 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:03:36.103104 | orchestrator |
2025-06-02 00:03:36.103115 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] ****
2025-06-02 00:03:36.103126 | orchestrator | Monday 02 June 2025 00:02:20 +0000 (0:00:00.936) 0:00:48.676 ***********
2025-06-02 00:03:36.103144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 00:03:36.103162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.103175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.103186 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:03:36.103197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 00:03:36.103222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.103234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.103245 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:03:36.103265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
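Both backend-TLS tasks skip on every node because this deployment terminates TLS at HAProxy only: each haproxy frontend in the items above carries tls_backend: 'no', so there is no backend certificate or key to copy. As a sketch of that gate (not the role's literal Jinja condition):

```python
# Sketch of the skip condition: both HAProxy frontends logged above have
# tls_backend set to 'no', so the TLS certificate and key tasks do nothing.
haproxy_frontends = {
    "barbican_api": {"tls_backend": "no"},
    "barbican_api_external": {"tls_backend": "no"},
}

backend_tls_enabled = any(
    frontend.get("tls_backend") == "yes"
    for frontend in haproxy_frontends.values()
)
print("copy backend TLS material" if backend_tls_enabled
      else "skipping: backend TLS disabled on all frontends")
```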
2025-06-02 00:03:36.103282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.103294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.103312 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:03:36.103323 | orchestrator |
2025-06-02 00:03:36.103334 | orchestrator | TASK [barbican : Copying over config.json files for services] ******************
2025-06-02 00:03:36.103345 | orchestrator | Monday 02 June 2025 00:02:21 +0000 (0:00:01.320) 0:00:49.996 ***********
2025-06-02 00:03:36.103357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 00:03:36.103374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 00:03:36.103392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 00:03:36.103404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.103475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.103495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.103506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
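Each config.json written here tells the kolla container entrypoint which command to run and which files to copy from /var/lib/kolla/config_files into place at startup. A typical shape for barbican-api, with illustrative values (the actual command and file list come from the role's templates, not from this log):

```python
import json

# Illustrative kolla config.json for barbican-api; command, paths, owners,
# and permissions are assumptions, not values taken from this deployment.
config = {
    "command": "uwsgi /etc/barbican/vassals/barbican-api.ini",
    "config_files": [
        {
            "source": "/var/lib/kolla/config_files/barbican.conf",
            "dest": "/etc/barbican/barbican.conf",
            "owner": "barbican",
            "perm": "0600",
        },
    ],
    "permissions": [
        {"path": "/var/log/kolla/barbican", "owner": "barbican:barbican", "recurse": True},
    ],
}
print(json.dumps(config, indent=4))
```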
2025-06-02 00:03:36.103525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.103537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.103548 | orchestrator |
2025-06-02 00:03:36.103559 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2025-06-02 00:03:36.103570 | orchestrator | Monday 02 June 2025 00:02:26 +0000 (0:00:04.630) 0:00:54.627 ***********
2025-06-02 00:03:36.103581 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:03:36.103593 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:03:36.103604 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:03:36.103615 | orchestrator |
2025-06-02 00:03:36.103631 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2025-06-02 00:03:36.103642 | orchestrator | Monday 02 June 2025 00:02:29 +0000 (0:00:02.847) 0:00:57.474 ***********
2025-06-02 00:03:36.103653 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-02 00:03:36.103664 | orchestrator |
2025-06-02 00:03:36.103675 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2025-06-02 00:03:36.103686 | orchestrator | Monday 02 June 2025 00:02:30 +0000 (0:00:01.330) 0:00:58.805 ***********
2025-06-02 00:03:36.103697 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:03:36.103714 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:03:36.103726 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:03:36.103737 | orchestrator |
2025-06-02 00:03:36.103747 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2025-06-02 00:03:36.103787 | orchestrator | Monday 02 June 2025 00:02:31 +0000 (0:00:01.225) 0:01:00.030 ***********
2025-06-02 00:03:36.103807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 00:03:36.103827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 00:03:36.103856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 00:03:36.103872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.103890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.103910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.103921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.103932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.103943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.103954 | orchestrator |
2025-06-02 00:03:36.103965 | orchestrator | TASK [barbican : Copying over existing policy file] ****************************
2025-06-02 00:03:36.103977 | orchestrator | Monday 02 June 2025 00:02:41 +0000 (0:00:09.853) 0:01:09.884 ***********
2025-06-02 00:03:36.103995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
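The barbican.conf copied above is assembled by layering the role's template output with any operator overrides before the file lands on the node; kolla-ansible does this with its merge_configs plugin. A minimal stand-in with configparser (the file names below are assumptions):

```python
from configparser import ConfigParser

# Minimal stand-in for kolla-ansible's merge_configs plugin: later sources
# override earlier ones, key by key. File names here are illustrative.
merged = ConfigParser()
for source in (
    "barbican.conf.j2-rendered",        # role default template output
    "/etc/kolla/config/global.conf",    # operator-wide overrides
    "/etc/kolla/config/barbican.conf",  # service-specific overrides
):
    merged.read(source)  # missing files are silently skipped

with open("barbican.conf", "w") as out:
    merged.write(out)
```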
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 00:03:36.104031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 00:03:36.104042 | orchestrator | skipping: [testbed-node-0] 2025-06-02 00:03:36.104054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 00:03:36.104065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 00:03:36.104083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  
2025-06-02 00:03:36.104095 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:03:36.104107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 00:03:36.104128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.104140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.104151 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:03:36.104163 | orchestrator |
2025-06-02 00:03:36.104174 | orchestrator | TASK [barbican : Check barbican containers] ************************************
2025-06-02 00:03:36.104185 | orchestrator | Monday 02 June 2025 00:02:43 +0000 (0:00:01.461) 0:01:11.345 ***********
2025-06-02 00:03:36.104196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 00:03:36.104247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 00:03:36.104272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 00:03:36.104284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.104295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.104307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.104318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.104338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.104360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 00:03:36.104372 | orchestrator |
2025-06-02 00:03:36.104388 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-06-02 00:03:36.104399 | orchestrator | Monday 02 June 2025 00:02:46 +0000 (0:00:00.451) 0:01:14.493 ***********
2025-06-02 00:03:36.104410 | orchestrator | skipping: [testbed-node-0]
2025-06-02 00:03:36.104421 | orchestrator | skipping: [testbed-node-1]
2025-06-02 00:03:36.104432 | orchestrator | skipping: [testbed-node-2]
2025-06-02 00:03:36.104443 | orchestrator |
2025-06-02 00:03:36.104454 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2025-06-02 00:03:36.104465 | orchestrator | Monday 02 June 2025 00:02:46 +0000 (0:00:00.451) 0:01:14.945 ***********
2025-06-02 00:03:36.104475 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:03:36.104486 | orchestrator |
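The `healthcheck` block in each service item above (`interval`, `retries`, `start_period`, `test`, `timeout`) corresponds one-to-one to Docker's container healthcheck options. A minimal Python sketch of that mapping; the helper name and the seconds-based reading of the values are assumptions for illustration, not taken from kolla-ansible:

```python
# Sketch: translate a kolla-style healthcheck dict (as printed in the task
# items above) into `docker run` health flags.

def healthcheck_to_docker_args(hc: dict) -> list[str]:
    """Map interval/retries/start_period/test/timeout to docker CLI flags."""
    test = hc["test"]
    # ['CMD-SHELL', '<command>'] means: run <command> through a shell.
    cmd = test[1] if test[0] == "CMD-SHELL" else " ".join(test)
    return [
        "--health-cmd", cmd,
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", str(hc["retries"]),
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]

args = healthcheck_to_docker_args({
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9311"],
    "timeout": "30",
})
print(" ".join(args))
```

Note the difference visible in the log: the API container is probed over HTTP (`healthcheck_curl` against port 9311), while the listener and worker are probed only for an open RabbitMQ connection (`healthcheck_port ... 5672`).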
2025-06-02 00:03:36.104497 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2025-06-02 00:03:36.104508 | orchestrator | Monday 02 June 2025 00:02:48 +0000 (0:00:02.033) 0:01:16.978 ***********
2025-06-02 00:03:36.104519 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:03:36.104529 | orchestrator |
2025-06-02 00:03:36.104540 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2025-06-02 00:03:36.104551 | orchestrator | Monday 02 June 2025 00:02:51 +0000 (0:00:02.134) 0:01:19.113 ***********
2025-06-02 00:03:36.104562 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:03:36.104572 | orchestrator |
2025-06-02 00:03:36.104583 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-06-02 00:03:36.104594 | orchestrator | Monday 02 June 2025 00:03:02 +0000 (0:00:11.275) 0:01:30.389 ***********
2025-06-02 00:03:36.104605 | orchestrator |
2025-06-02 00:03:36.104616 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-06-02 00:03:36.104626 | orchestrator | Monday 02 June 2025 00:03:02 +0000 (0:00:00.129) 0:01:30.518 ***********
2025-06-02 00:03:36.104637 | orchestrator |
2025-06-02 00:03:36.104648 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-06-02 00:03:36.104659 | orchestrator | Monday 02 June 2025 00:03:02 +0000 (0:00:00.129) 0:01:30.648 ***********
2025-06-02 00:03:36.104670 | orchestrator |
2025-06-02 00:03:36.104681 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2025-06-02 00:03:36.104692 | orchestrator | Monday 02 June 2025 00:03:02 +0000 (0:00:00.133) 0:01:30.781 ***********
2025-06-02 00:03:36.104781 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:03:36.104798 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:03:36.104809 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:03:36.104820 | orchestrator |
2025-06-02 00:03:36.104831 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2025-06-02 00:03:36.104842 | orchestrator | Monday 02 June 2025 00:03:15 +0000 (0:00:12.762) 0:01:43.543 ***********
2025-06-02 00:03:36.104853 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:03:36.104863 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:03:36.104874 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:03:36.104885 | orchestrator |
2025-06-02 00:03:36.104896 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2025-06-02 00:03:36.104907 | orchestrator | Monday 02 June 2025 00:03:26 +0000 (0:00:11.027) 0:01:54.571 ***********
2025-06-02 00:03:36.104926 | orchestrator | changed: [testbed-node-0]
2025-06-02 00:03:36.104937 | orchestrator | changed: [testbed-node-1]
2025-06-02 00:03:36.104948 | orchestrator | changed: [testbed-node-2]
2025-06-02 00:03:36.104959 | orchestrator |
2025-06-02 00:03:36.104970 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 00:03:36.104982 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-06-02 00:03:36.104995 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 00:03:36.105006 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 00:03:36.105017 | orchestrator |
2025-06-02 00:03:36.105028 | orchestrator |
2025-06-02 00:03:36.105039 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 00:03:36.105050 | orchestrator | Monday 02 June 2025 00:03:34 +0000 (0:00:08.240) 0:02:02.811 ***********
2025-06-02 00:03:36.105061 | orchestrator | ===============================================================================
2025-06-02 00:03:36.105072 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.31s
2025-06-02 00:03:36.105089 | orchestrator | barbican : Restart barbican-api container ------------------------------ 12.76s
2025-06-02 00:03:36.105101 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.28s
2025-06-02 00:03:36.105112 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 11.03s
2025-06-02 00:03:36.105123 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.85s
2025-06-02 00:03:36.105134 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 8.24s
2025-06-02 00:03:36.105145 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.40s
2025-06-02 00:03:36.105155 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.63s
2025-06-02 00:03:36.105166 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.09s
2025-06-02 00:03:36.105177 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.79s
2025-06-02 00:03:36.105188 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.45s
2025-06-02 00:03:36.105199 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.33s
2025-06-02 00:03:36.105210 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.15s
2025-06-02 00:03:36.105226 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.13s
2025-06-02 00:03:36.105238 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.85s
2025-06-02 00:03:36.105249 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.37s
2025-06-02 00:03:36.105260 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.13s
2025-06-02 00:03:36.105271 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.03s
2025-06-02 00:03:36.105281 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.71s
2025-06-02 00:03:36.105292 | orchestrator | barbican : include_tasks ------------------------------------------------ 1.53s
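The per-task durations in the recap above come from an Ansible profiling callback. When hunting for slow steps across many such logs, it can help to parse these lines programmatically; a small self-contained sketch (the regex and threshold are my own choices, not part of the OSISM tooling):

```python
import re

# Matches recap lines such as:
#   barbican : Restart barbican-api container ------------------ 12.76s
RECAP = re.compile(r"^(?P<task>.+?) -{2,} (?P<secs>\d+\.\d+)s$")

def slow_tasks(lines, threshold=10.0):
    """Yield (task, seconds) for recap entries at or above the threshold."""
    for line in lines:
        m = RECAP.match(line.strip())
        if m and float(m.group("secs")) >= threshold:
            yield m.group("task"), float(m.group("secs"))

sample = [
    "service-ks-register : barbican | Creating roles ------------------------ 15.31s",
    "barbican : include_tasks ------------------------------------------------ 1.53s",
]
for task, secs in slow_tasks(sample):
    print(f"{secs:6.2f}s {task}")
```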
2025-06-02 00:03:36.105303 | orchestrator | 2025-06-02 00:03:36 | INFO  | Task 46ae15ef-6408-4095-b0c0-e4017efa90af is in state STARTED
2025-06-02 00:03:36.105314 | orchestrator | 2025-06-02 00:03:36 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:03:39.133105 | orchestrator | 2025-06-02 00:03:39 | INFO  | Task f50c1eb1-fa16-4374-9938-96d3e431e6e6 is in state STARTED
2025-06-02 00:03:39.133208 | orchestrator | 2025-06-02 00:03:39 | INFO  | Task b135a5f4-a278-420e-be42-77c1b8199e38 is in state STARTED
2025-06-02 00:03:39.133251 | orchestrator | 2025-06-02 00:03:39 | INFO  | Task 6e8a59fd-a405-4799-8e89-5dbeba441afb is in state STARTED
2025-06-02 00:03:39.134604 | orchestrator | 2025-06-02 00:03:39 | INFO  | Task 46ae15ef-6408-4095-b0c0-e4017efa90af is in state STARTED
2025-06-02 00:03:39.134631 | orchestrator | 2025-06-02 00:03:39 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:04:21.653350 | orchestrator | 2025-06-02 00:04:21 | INFO  | Task f50c1eb1-fa16-4374-9938-96d3e431e6e6 is in state SUCCESS
2025-06-02 00:04:21.655534 | orchestrator | 2025-06-02 00:04:21 | INFO  | Task b135a5f4-a278-420e-be42-77c1b8199e38 is in state STARTED
2025-06-02 00:04:21.659514 | orchestrator | 2025-06-02 00:04:21 | INFO  | Task 6e8a59fd-a405-4799-8e89-5dbeba441afb is in state STARTED
2025-06-02 00:04:21.662573 | orchestrator | 2025-06-02 00:04:21 | INFO  | Task 46ae15ef-6408-4095-b0c0-e4017efa90af is in state STARTED
2025-06-02 00:04:21.662660 | orchestrator | 2025-06-02 00:04:21 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:04:24.707534 | orchestrator | 2025-06-02 00:04:24 | INFO  | Task b135a5f4-a278-420e-be42-77c1b8199e38 is in state STARTED
2025-06-02 00:04:24.711836 | orchestrator | 2025-06-02 00:04:24 | INFO  | Task 6e8a59fd-a405-4799-8e89-5dbeba441afb is in state STARTED
2025-06-02 00:04:24.712438 | orchestrator | 2025-06-02 00:04:24 | INFO  | Task 46ae15ef-6408-4095-b0c0-e4017efa90af is in state STARTED
2025-06-02 00:04:24.713295 | orchestrator | 2025-06-02 00:04:24 | INFO  | Task 34d52da3-c36e-40de-b700-20886da175d4 is in state STARTED
2025-06-02 00:04:24.713372 | orchestrator | 2025-06-02 00:04:24 | INFO  | Wait 1 second(s) until the next check
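These status messages come from the deploy wrapper polling OSISM manager task states in a loop until every task leaves STARTED; the surrounding Zuul job has a hard deadline, which is what fires shortly after this point. A hedged sketch of such a wait loop; the function and `get_state` callable are illustrative, not the osism client's real interface:

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, deadline=None):
    """Poll task states until all leave STARTED or the deadline passes.

    get_state: callable mapping a task id to a state string (illustrative;
    the real client API is not shown in this log).
    """
    pending = set(task_ids)
    while pending:
        if deadline is not None and time.monotonic() >= deadline:
            raise TimeoutError(f"tasks still running: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

Here the outer deadline is enforced by Zuul rather than the loop itself, so the run below ends in RESULT_TIMED_OUT while three tasks are still STARTED.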
2025-06-02 00:04:55.185732 | orchestrator | 2025-06-02 00:04:55 | INFO  | Task b135a5f4-a278-420e-be42-77c1b8199e38 is in state STARTED
2025-06-02 00:04:55.188195 | orchestrator | 2025-06-02 00:04:55 | INFO  | Task 6e8a59fd-a405-4799-8e89-5dbeba441afb is in state STARTED
2025-06-02 00:04:55.188243 | orchestrator | 2025-06-02 00:04:55 | INFO  | Task 46ae15ef-6408-4095-b0c0-e4017efa90af is in state STARTED
2025-06-02 00:04:55.188257 | orchestrator | 2025-06-02 00:04:55 | INFO  | Task 34d52da3-c36e-40de-b700-20886da175d4 is in state STARTED
2025-06-02 00:04:55.188268 | orchestrator | 2025-06-02 00:04:55 | INFO  | Wait 1 second(s) until the next check
2025-06-02 00:04:57.161984 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-06-02 00:04:57.163426 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-06-02 00:04:57.920045 |
2025-06-02 00:04:57.920219 | PLAY [Post output play]
2025-06-02 00:04:57.937244 |
2025-06-02 00:04:57.937399 | LOOP [stage-output : Register sources]
2025-06-02 00:04:58.031704 |
2025-06-02 00:04:58.032125 | TASK [stage-output : Check sudo]
2025-06-02 00:04:58.965568 | orchestrator | sudo: a password is required
2025-06-02 00:04:59.076641 | orchestrator | ok: Runtime: 0:00:00.016972
2025-06-02 00:04:59.084384 |
2025-06-02 00:04:59.084509 | LOOP [stage-output : Set source and destination for files and folders]
2025-06-02 00:04:59.129851 |
2025-06-02 00:04:59.130167 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-06-02 00:04:59.209595 | orchestrator | ok
2025-06-02 00:04:59.220213 |
2025-06-02 00:04:59.220415 | LOOP [stage-output : Ensure target folders exist]
2025-06-02 00:04:59.774879 | orchestrator | ok: "docs"
2025-06-02 00:04:59.775181 |
2025-06-02 00:05:00.051309 | orchestrator | ok: "artifacts"
2025-06-02 00:05:00.346484 | orchestrator | ok: "logs"
2025-06-02 00:05:00.362243 |
2025-06-02 00:05:00.362417 | LOOP [stage-output : Copy files and folders to staging folder]
2025-06-02 00:05:00.399009 |
2025-06-02 00:05:00.399311 | TASK [stage-output : Make all log files readable]
2025-06-02 00:05:00.708009 | orchestrator | ok
2025-06-02 00:05:00.714767 |
2025-06-02 00:05:00.715007 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-06-02 00:05:00.751117 | orchestrator | skipping: Conditional result was False
2025-06-02 00:05:00.760646 |
2025-06-02 00:05:00.760810 | TASK [stage-output : Discover log files for compression]
2025-06-02 00:05:00.785433 | orchestrator | skipping: Conditional result was False
2025-06-02 00:05:00.793410 |
2025-06-02 00:05:00.793528 | LOOP [stage-output : Archive everything from logs]
2025-06-02 00:05:00.833009 |
2025-06-02 00:05:00.833180 | PLAY [Post cleanup play]
2025-06-02 00:05:00.841615 |
2025-06-02 00:05:00.841729 | TASK [Set cloud fact (Zuul deployment)]
2025-06-02 00:05:00.906432 | orchestrator | ok
2025-06-02 00:05:00.919807 |
2025-06-02 00:05:00.919939 | TASK [Set cloud fact (local deployment)]
2025-06-02 00:05:00.966018 | orchestrator | skipping: Conditional result was False
2025-06-02 00:05:00.982018 |
2025-06-02 00:05:00.982270 | TASK [Clean the cloud environment]
2025-06-02 00:05:01.804369 | orchestrator | 2025-06-02 00:05:01 - clean up servers
2025-06-02 00:05:02.556117 | orchestrator | 2025-06-02 00:05:02 - testbed-manager
2025-06-02 00:05:02.645534 | orchestrator | 2025-06-02 00:05:02 - testbed-node-4
2025-06-02 00:05:02.974543 | orchestrator | 2025-06-02 00:05:02 - testbed-node-3
2025-06-02 00:05:03.078953 | orchestrator | 2025-06-02 00:05:03 - testbed-node-5
2025-06-02 00:05:03.186607 | orchestrator | 2025-06-02 00:05:03 - testbed-node-0
2025-06-02 00:05:03.268610 | orchestrator | 2025-06-02 00:05:03 - testbed-node-2
2025-06-02 00:05:03.381791 | orchestrator | 2025-06-02 00:05:03 - testbed-node-1
2025-06-02 00:05:03.482494 | orchestrator | 2025-06-02 00:05:03 - clean up keypairs
2025-06-02 00:05:03.503417 | orchestrator | 2025-06-02 00:05:03 - testbed
2025-06-02 00:05:03.529538 | orchestrator | 2025-06-02 00:05:03 - wait for servers to be gone
2025-06-02 00:05:14.538340 | orchestrator | 2025-06-02 00:05:14 - clean up ports
2025-06-02 00:05:14.757872 | orchestrator | 2025-06-02 00:05:14 - 0eb37921-f7ea-44cd-8639-c1e87823bc32
2025-06-02 00:05:14.995876 | orchestrator | 2025-06-02 00:05:14 - 1a3da06a-9ecb-4999-bf0b-767a5868df84
2025-06-02 00:05:15.241906 | orchestrator | 2025-06-02 00:05:15 - 4b170823-6d6f-4fd3-a8fd-6d2f8d9003dd
2025-06-02 00:05:15.442924 | orchestrator | 2025-06-02 00:05:15 - 62b67b34-c2c4-4ae8-9c52-cdd0219fd9bc
2025-06-02 00:05:15.654102 | orchestrator | 2025-06-02 00:05:15 - 691d71bd-581a-49a4-aad1-d8a3907601bf
2025-06-02 00:05:15.869266 | orchestrator | 2025-06-02 00:05:15 - ab667968-e5f0-4289-9eb6-d18c95afb0a1
2025-06-02 00:05:16.252248 | orchestrator | 2025-06-02 00:05:16 - b1a38b67-b07f-4b45-9ff6-de2b62bd0d13
2025-06-02 00:05:16.451109 | orchestrator | 2025-06-02 00:05:16 - clean up volumes
2025-06-02 00:05:16.566315 | orchestrator | 2025-06-02 00:05:16 - testbed-volume-2-node-base
2025-06-02 00:05:16.604410 | orchestrator | 2025-06-02 00:05:16 - testbed-volume-5-node-base
2025-06-02 00:05:16.645360 | orchestrator | 2025-06-02 00:05:16 - testbed-volume-3-node-base
2025-06-02 00:05:16.691159 | orchestrator | 2025-06-02 00:05:16 - testbed-volume-0-node-base
2025-06-02 00:05:16.732620 | orchestrator | 2025-06-02 00:05:16 - testbed-volume-1-node-base
2025-06-02 00:05:16.775185 | orchestrator | 2025-06-02 00:05:16 - testbed-volume-manager-base
2025-06-02 00:05:16.819264 | orchestrator | 2025-06-02 00:05:16 - testbed-volume-4-node-base
2025-06-02 00:05:16.866994 | orchestrator | 2025-06-02 00:05:16 - testbed-volume-6-node-3
2025-06-02 00:05:16.914982 | orchestrator | 2025-06-02 00:05:16 - testbed-volume-3-node-3
2025-06-02 00:05:16.956677 | orchestrator | 2025-06-02 00:05:16 - testbed-volume-5-node-5
2025-06-02 00:05:17.001214 | orchestrator | 2025-06-02 00:05:17 - testbed-volume-8-node-5
2025-06-02 00:05:17.042816 | orchestrator | 2025-06-02 00:05:17 - testbed-volume-1-node-4
2025-06-02 00:05:17.085792 | orchestrator | 2025-06-02 00:05:17 - testbed-volume-2-node-5
2025-06-02 00:05:17.127680 | orchestrator | 2025-06-02 00:05:17 - testbed-volume-0-node-3
2025-06-02 00:05:17.175585 | orchestrator | 2025-06-02 00:05:17 - testbed-volume-4-node-4
2025-06-02 00:05:17.216311 | orchestrator | 2025-06-02 00:05:17 - testbed-volume-7-node-4
2025-06-02 00:05:17.264196 | orchestrator | 2025-06-02 00:05:17 - disconnect routers
2025-06-02 00:05:17.380908 | orchestrator | 2025-06-02 00:05:17 - testbed
2025-06-02 00:05:18.265599 | orchestrator | 2025-06-02 00:05:18 - clean up subnets
2025-06-02 00:05:18.312889 | orchestrator | 2025-06-02 00:05:18 - subnet-testbed-management
2025-06-02 00:05:18.481326 | orchestrator | 2025-06-02 00:05:18 - clean up networks
2025-06-02 00:05:18.658529 | orchestrator | 2025-06-02 00:05:18 - net-testbed-management
2025-06-02 00:05:18.959243 | orchestrator | 2025-06-02 00:05:18 - clean up security groups
2025-06-02 00:05:19.001245 | orchestrator | 2025-06-02 00:05:19 - testbed-management
2025-06-02 00:05:19.115445 | orchestrator | 2025-06-02 00:05:19 - testbed-node
2025-06-02 00:05:19.222339 | orchestrator | 2025-06-02 00:05:19 - clean up floating ips
2025-06-02 00:05:19.261831 | orchestrator | 2025-06-02 00:05:19 - 81.163.193.143
2025-06-02 00:05:19.625638 | orchestrator | 2025-06-02 00:05:19 - clean up routers
2025-06-02 00:05:19.727312 | orchestrator | 2025-06-02 00:05:19 - testbed
2025-06-02 00:05:21.043306 | orchestrator | ok: Runtime: 0:00:19.218338
2025-06-02 00:05:21.049747 |
2025-06-02 00:05:21.049949 | PLAY RECAP
2025-06-02 00:05:21.050062 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-06-02 00:05:21.050114 |
2025-06-02 00:05:21.230429 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
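The cleanup above tears resources down in strict dependency order: servers, then keypairs, ports, volumes, router interfaces, subnets, networks, security groups, floating IPs, and finally routers. A compressed openstacksdk sketch of the same order; the cloud name, the "testbed" prefix filter, and the exact mapping to SDK calls are assumptions, since the playbook's implementation is not shown in the log:

```python
import openstack

# Teardown in the same dependency order as the log: servers first, routers last.
conn = openstack.connect(cloud="testbed")
mine = lambda rs: [r for r in rs if (r.name or "").startswith("testbed")]

for server in mine(conn.compute.servers()):
    conn.compute.delete_server(server)
for keypair in mine(conn.compute.keypairs()):
    conn.compute.delete_keypair(keypair)
for port in conn.network.ports(device_owner=""):        # unattached ports only
    conn.network.delete_port(port)
for volume in mine(conn.block_storage.volumes()):
    conn.block_storage.delete_volume(volume)
for router in mine(conn.network.routers()):
    for subnet in mine(conn.network.subnets()):
        # Interfaces must be detached before subnets can be deleted.
        conn.network.remove_interface_from_router(router, subnet_id=subnet.id)
for subnet in mine(conn.network.subnets()):
    conn.network.delete_subnet(subnet)
for network in mine(conn.network.networks()):
    conn.network.delete_network(network)
for group in mine(conn.network.security_groups()):
    conn.network.delete_security_group(group)
for ip in conn.network.ips():                           # floating IPs
    conn.network.delete_ip(ip)
for router in mine(conn.network.routers()):
    conn.network.delete_router(router)
```

The second cleanup pass below runs the same sequence against the already-emptied project, which is why each phase completes immediately with nothing to report.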
2025-06-02 00:05:21.233577 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-06-02 00:05:22.064036 |
2025-06-02 00:05:22.064225 | PLAY [Cleanup play]
2025-06-02 00:05:22.081098 |
2025-06-02 00:05:22.081257 | TASK [Set cloud fact (Zuul deployment)]
2025-06-02 00:05:22.149061 | orchestrator | ok
2025-06-02 00:05:22.158506 |
2025-06-02 00:05:22.158670 | TASK [Set cloud fact (local deployment)]
2025-06-02 00:05:22.193667 | orchestrator | skipping: Conditional result was False
2025-06-02 00:05:22.206892 |
2025-06-02 00:05:22.207042 | TASK [Clean the cloud environment]
2025-06-02 00:05:23.422250 | orchestrator | 2025-06-02 00:05:23 - clean up servers
2025-06-02 00:05:23.914966 | orchestrator | 2025-06-02 00:05:23 - clean up keypairs
2025-06-02 00:05:23.928784 | orchestrator | 2025-06-02 00:05:23 - wait for servers to be gone
2025-06-02 00:05:23.973711 | orchestrator | 2025-06-02 00:05:23 - clean up ports
2025-06-02 00:05:24.049246 | orchestrator | 2025-06-02 00:05:24 - clean up volumes
2025-06-02 00:05:24.112663 | orchestrator | 2025-06-02 00:05:24 - disconnect routers
2025-06-02 00:05:24.141613 | orchestrator | 2025-06-02 00:05:24 - clean up subnets
2025-06-02 00:05:24.165689 | orchestrator | 2025-06-02 00:05:24 - clean up networks
2025-06-02 00:05:24.339542 | orchestrator | 2025-06-02 00:05:24 - clean up security groups
2025-06-02 00:05:24.378089 | orchestrator | 2025-06-02 00:05:24 - clean up floating ips
2025-06-02 00:05:24.403768 | orchestrator | 2025-06-02 00:05:24 - clean up routers
2025-06-02 00:05:24.745056 | orchestrator | ok: Runtime: 0:00:01.382087
2025-06-02 00:05:24.749256 |
2025-06-02 00:05:24.749432 | PLAY RECAP
2025-06-02 00:05:24.749569 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-06-02 00:05:24.749641 |
2025-06-02 00:05:24.931401 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-06-02 00:05:24.932530 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-06-02 00:05:25.757987 |
2025-06-02 00:05:25.758203 | PLAY [Base post-fetch]
2025-06-02 00:05:25.775006 |
2025-06-02 00:05:25.775175 | TASK [fetch-output : Set log path for multiple nodes]
2025-06-02 00:05:25.832807 | orchestrator | skipping: Conditional result was False
2025-06-02 00:05:25.847002 |
2025-06-02 00:05:25.847238 | TASK [fetch-output : Set log path for single node]
2025-06-02 00:05:25.903363 | orchestrator | ok
2025-06-02 00:05:25.911608 |
2025-06-02 00:05:25.911756 | LOOP [fetch-output : Ensure local output dirs]
2025-06-02 00:05:26.510479 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/3bfa790e389247fa8ffbfba5f0ea409c/work/logs"
2025-06-02 00:05:26.785847 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/3bfa790e389247fa8ffbfba5f0ea409c/work/artifacts"
2025-06-02 00:05:27.097172 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/3bfa790e389247fa8ffbfba5f0ea409c/work/docs"
2025-06-02 00:05:27.124118 |
2025-06-02 00:05:27.124315 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-06-02 00:05:28.131045 | orchestrator | changed: .d..t...... ./
2025-06-02 00:05:28.131395 | orchestrator | changed: All items complete
2025-06-02 00:05:28.131440 |
2025-06-02 00:05:28.899843 | orchestrator | changed: .d..t...... ./
2025-06-02 00:05:29.670977 | orchestrator | changed: .d..t...... ./
2025-06-02 00:05:29.700980 |
2025-06-02 00:05:29.701174 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-06-02 00:05:29.737440 | orchestrator | skipping: Conditional result was False
2025-06-02 00:05:29.741553 | orchestrator | skipping: Conditional result was False
2025-06-02 00:05:29.761552 |
2025-06-02 00:05:29.761687 | PLAY RECAP
2025-06-02 00:05:29.761767 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-06-02 00:05:29.761825 |
2025-06-02 00:05:29.956187 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-06-02 00:05:29.957494 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-06-02 00:05:30.758072 |
2025-06-02 00:05:30.758271 | PLAY [Base post]
2025-06-02 00:05:30.773586 |
2025-06-02 00:05:30.773731 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-06-02 00:05:31.788412 | orchestrator | changed
2025-06-02 00:05:31.798681 |
2025-06-02 00:05:31.798899 | PLAY RECAP
2025-06-02 00:05:31.798989 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-06-02 00:05:31.799070 |
2025-06-02 00:05:31.937386 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-06-02 00:05:31.939842 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-06-02 00:05:32.802994 |
2025-06-02 00:05:32.803212 | PLAY [Base post-logs]
2025-06-02 00:05:32.816124 |
2025-06-02 00:05:32.816289 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-06-02 00:05:33.324421 | localhost | changed
2025-06-02 00:05:33.343668 |
2025-06-02 00:05:33.343951 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-06-02 00:05:33.385528 | localhost | ok
2025-06-02 00:05:33.393189 |
2025-06-02 00:05:33.393346 | TASK [Set zuul-log-path fact]
2025-06-02 00:05:33.422626 | localhost | ok
2025-06-02 00:05:33.436893 |
2025-06-02 00:05:33.437353 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-06-02 00:05:33.475354 | localhost | ok
2025-06-02 00:05:33.478587 |
2025-06-02 00:05:33.478689 | TASK [upload-logs : Create log directories]
2025-06-02 00:05:34.049242 | localhost | changed
2025-06-02 00:05:34.053966 |
2025-06-02 00:05:34.054129 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-06-02 00:05:34.671857 | localhost -> localhost | ok: Runtime: 0:00:00.007297
2025-06-02 00:05:34.683480 |
2025-06-02 00:05:34.683762 | TASK [upload-logs : Upload logs to log server]
2025-06-02 00:05:35.315303 | localhost | Output suppressed because no_log was given
2025-06-02 00:05:35.319722 |
2025-06-02 00:05:35.320040 | LOOP [upload-logs : Compress console log and json output]
2025-06-02 00:05:35.384539 | localhost | skipping: Conditional result was False
2025-06-02 00:05:35.389267 | localhost | skipping: Conditional result was False
2025-06-02 00:05:35.405398 |
2025-06-02 00:05:35.405683 | LOOP [upload-logs : Upload compressed console log and json output]
2025-06-02 00:05:35.467617 | localhost | skipping: Conditional result was False
2025-06-02 00:05:35.468249 |
2025-06-02 00:05:35.471715 | localhost | skipping: Conditional result was False
2025-06-02 00:05:35.481156 |
2025-06-02 00:05:35.481482 | LOOP [upload-logs : Upload console log and json output]
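The generate-zuul-manifest step above produces a JSON description of the staged log tree so the Zuul dashboard can render a log browser. A simplified sketch of what such a manifest step does; the real role's schema has more fields, and this structure is illustrative only:

```python
import json
import os

# Sketch: build a JSON tree of a staged log directory. The "tree" shape and
# mimetype handling here are simplified assumptions, not the role's schema.
def build_manifest(root: str) -> list[dict]:
    entries = []
    for name in sorted(os.listdir(root)):
        path = os.path.join(root, name)
        if os.path.isdir(path):
            entries.append({"name": name,
                            "mimetype": "application/directory",
                            "children": build_manifest(path)})
        else:
            entries.append({"name": name, "mimetype": "text/plain"})
    return entries

if __name__ == "__main__":
    print(json.dumps({"tree": build_manifest(".")}, indent=2))
```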