2025-02-04 08:40:42.399874 | Job console starting...
2025-02-04 08:40:42.421079 | Updating repositories
2025-02-04 08:40:42.492022 | Preparing job workspace
2025-02-04 08:40:44.448454 | Running Ansible setup...
2025-02-04 08:40:49.465184 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-02-04 08:40:50.205514 |
2025-02-04 08:40:50.205696 | PLAY [Base pre]
2025-02-04 08:40:50.239757 |
2025-02-04 08:40:50.239914 | TASK [Setup log path fact]
2025-02-04 08:40:50.273650 | orchestrator | ok
2025-02-04 08:40:50.296934 |
2025-02-04 08:40:50.297089 | TASK [set-zuul-log-path-fact : Set log path for a change]
2025-02-04 08:40:50.342434 | orchestrator | skipping: Conditional result was False
2025-02-04 08:40:50.353092 |
2025-02-04 08:40:50.353232 | TASK [set-zuul-log-path-fact : Set log path for a ref update]
2025-02-04 08:40:50.414202 | orchestrator | ok
2025-02-04 08:40:50.425673 |
2025-02-04 08:40:50.425816 | TASK [set-zuul-log-path-fact : Set log path for a periodic job]
2025-02-04 08:40:50.471439 | orchestrator | skipping: Conditional result was False
2025-02-04 08:40:50.489004 |
2025-02-04 08:40:50.489159 | TASK [set-zuul-log-path-fact : Set log path for a change]
2025-02-04 08:40:50.525432 | orchestrator | skipping: Conditional result was False
2025-02-04 08:40:50.543587 |
2025-02-04 08:40:50.543760 | TASK [set-zuul-log-path-fact : Set log path for a ref update]
2025-02-04 08:40:50.569829 | orchestrator | skipping: Conditional result was False
2025-02-04 08:40:50.579474 |
2025-02-04 08:40:50.579602 | TASK [set-zuul-log-path-fact : Set log path for a periodic job]
2025-02-04 08:40:50.604108 | orchestrator | skipping: Conditional result was False
2025-02-04 08:40:50.623486 |
2025-02-04 08:40:50.623607 | TASK [emit-job-header : Print job information]
2025-02-04 08:40:50.693940 | # Job Information
2025-02-04 08:40:50.694207 | Ansible Version: 2.15.3
2025-02-04 08:40:50.694264 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-02-04 08:40:50.694314 | Pipeline: post
2025-02-04 08:40:50.694351 | Executor: 7d211f194f6a
2025-02-04 08:40:50.694407 | Triggered by: https://github.com/osism/testbed/commit/a1e5a99bfb25cb0a321c640f1f5d766de90a11a6
2025-02-04 08:40:50.694443 | Event ID: b3a1f8f0-e2d3-11ef-983b-1b76d95e9de7
2025-02-04 08:40:50.704936 |
2025-02-04 08:40:50.705068 | LOOP [emit-job-header : Print node information]
2025-02-04 08:40:50.863599 | orchestrator | ok:
2025-02-04 08:40:50.863809 | orchestrator | # Node Information
2025-02-04 08:40:50.863848 | orchestrator | Inventory Hostname: orchestrator
2025-02-04 08:40:50.863878 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-02-04 08:40:50.863904 | orchestrator | Username: zuul-testbed05
2025-02-04 08:40:50.863928 | orchestrator | Distro: Debian 12.9
2025-02-04 08:40:50.863952 | orchestrator | Provider: static-testbed
2025-02-04 08:40:50.863974 | orchestrator | Label: testbed-orchestrator
2025-02-04 08:40:50.863997 | orchestrator | Product Name: OpenStack Nova
2025-02-04 08:40:50.864021 | orchestrator | Interface IP: 81.163.193.140
2025-02-04 08:40:50.895462 |
2025-02-04 08:40:50.895609 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-02-04 08:40:51.383567 | orchestrator -> localhost | changed
2025-02-04 08:40:51.393134 |
2025-02-04 08:40:51.393259 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-02-04 08:40:52.455029 | orchestrator -> localhost | changed
2025-02-04 08:40:52.483263 |
2025-02-04 08:40:52.483415 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-02-04 08:40:52.790608 | orchestrator -> localhost | ok
2025-02-04 08:40:52.812180 |
2025-02-04 08:40:52.812361 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-02-04 08:40:52.855024 | orchestrator | ok
2025-02-04 08:40:52.881828 | orchestrator | included: /var/lib/zuul/builds/6d1a90cbcc2642bb8f983473e166609b/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-02-04 08:40:52.891159 |
2025-02-04 08:40:52.891277 | TASK [add-build-sshkey : Create Temp SSH key]
2025-02-04 08:40:53.597906 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-02-04 08:40:53.598346 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/6d1a90cbcc2642bb8f983473e166609b/work/6d1a90cbcc2642bb8f983473e166609b_id_rsa
2025-02-04 08:40:53.598493 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/6d1a90cbcc2642bb8f983473e166609b/work/6d1a90cbcc2642bb8f983473e166609b_id_rsa.pub
2025-02-04 08:40:53.598564 | orchestrator -> localhost | The key fingerprint is:
2025-02-04 08:40:53.598637 | orchestrator -> localhost | SHA256:mha42YhZVFRYSecsJpFIiQ21B1TNVynWzZ1AUGfsLfU zuul-build-sshkey
2025-02-04 08:40:53.598699 | orchestrator -> localhost | The key's randomart image is:
2025-02-04 08:40:53.598762 | orchestrator -> localhost | +---[RSA 3072]----+
2025-02-04 08:40:53.598848 | orchestrator -> localhost | | .B===O+..+=Bo+.|
2025-02-04 08:40:53.598901 | orchestrator -> localhost | | . ++o.+++ o =oo|
2025-02-04 08:40:53.598951 | orchestrator -> localhost | | o o ooo. ..o|
2025-02-04 08:40:53.598999 | orchestrator -> localhost | | . o o . ..E|
2025-02-04 08:40:53.599048 | orchestrator -> localhost | | o . S . |
2025-02-04 08:40:53.599095 | orchestrator -> localhost | | + = + |
2025-02-04 08:40:53.599144 | orchestrator -> localhost | | o + = |
2025-02-04 08:40:53.599194 | orchestrator -> localhost | | . |
2025-02-04 08:40:53.599242 | orchestrator -> localhost | | |
2025-02-04 08:40:53.599357 | orchestrator -> localhost | +----[SHA256]-----+
2025-02-04 08:40:53.599896 | orchestrator -> localhost | ok: Runtime: 0:00:00.219904
2025-02-04 08:40:53.616521 |
2025-02-04 08:40:53.616653 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-02-04 08:40:53.655453 | orchestrator | ok
2025-02-04 08:40:53.669891 | orchestrator | included: /var/lib/zuul/builds/6d1a90cbcc2642bb8f983473e166609b/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-02-04 08:40:53.680816 |
2025-02-04 08:40:53.680915 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-02-04 08:40:53.705320 | orchestrator | skipping: Conditional result was False
2025-02-04 08:40:53.714504 |
2025-02-04 08:40:53.714609 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-02-04 08:40:54.305743 | orchestrator | changed
2025-02-04 08:40:54.316758 |
2025-02-04 08:40:54.316886 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-02-04 08:40:54.615699 | orchestrator | ok
2025-02-04 08:40:54.626684 |
2025-02-04 08:40:54.626810 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-02-04 08:40:55.088357 | orchestrator | ok
2025-02-04 08:40:55.095776 |
2025-02-04 08:40:55.095891 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-02-04 08:40:55.528195 | orchestrator | ok
2025-02-04 08:40:55.538232 |
2025-02-04 08:40:55.538338 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-02-04 08:40:55.574142 | orchestrator | skipping: Conditional result was False
2025-02-04 08:40:55.593549 |
2025-02-04 08:40:55.593708 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-02-04 08:40:56.020667 | orchestrator -> localhost | changed
2025-02-04 08:40:56.037043 |
2025-02-04 08:40:56.037173 | TASK [add-build-sshkey : Add back temp key]
2025-02-04 08:40:56.422946 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/6d1a90cbcc2642bb8f983473e166609b/work/6d1a90cbcc2642bb8f983473e166609b_id_rsa (zuul-build-sshkey)
2025-02-04 08:40:56.423179 | orchestrator -> localhost | ok: Runtime: 0:00:00.011712
2025-02-04 08:40:56.431891 |
2025-02-04 08:40:56.432008 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-02-04 08:40:56.848509 | orchestrator | ok
2025-02-04 08:40:56.857276 |
2025-02-04 08:40:56.857464 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-02-04 08:40:56.894098 | orchestrator | skipping: Conditional result was False
2025-02-04 08:40:56.917092 |
2025-02-04 08:40:56.917213 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-02-04 08:40:57.336304 | orchestrator | ok
2025-02-04 08:40:57.355499 |
2025-02-04 08:40:57.355622 | TASK [validate-host : Define zuul_info_dir fact]
2025-02-04 08:40:57.388132 | orchestrator | ok
2025-02-04 08:40:57.395556 |
2025-02-04 08:40:57.395654 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-02-04 08:40:57.738241 | orchestrator -> localhost | ok
2025-02-04 08:40:57.747881 |
2025-02-04 08:40:57.748003 | TASK [validate-host : Collect information about the host]
2025-02-04 08:40:58.991588 | orchestrator | ok
2025-02-04 08:40:59.010359 |
2025-02-04 08:40:59.010527 | TASK [validate-host : Sanitize hostname]
2025-02-04 08:40:59.095239 | orchestrator | ok
2025-02-04 08:40:59.107514 |
2025-02-04 08:40:59.107725 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-02-04 08:40:59.759269 | orchestrator -> localhost | changed
2025-02-04 08:40:59.771446 |
2025-02-04 08:40:59.771591 | TASK [validate-host : Collect information about zuul worker]
2025-02-04 08:41:00.311392 | orchestrator | ok
2025-02-04 08:41:00.320484 |
2025-02-04 08:41:00.320623 | TASK [validate-host : Write out all zuul information for each host]
2025-02-04 08:41:00.921816 | orchestrator -> localhost | changed
2025-02-04 08:41:00.946828 |
2025-02-04 08:41:00.946979 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-02-04 08:41:01.246558 | orchestrator | ok
2025-02-04 08:41:01.257698 |
2025-02-04 08:41:01.257834 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-02-04 08:41:45.383961 | orchestrator | changed:
2025-02-04 08:41:45.384084 | orchestrator | .d..t...... src/
2025-02-04 08:41:45.384111 | orchestrator | .d..t...... src/github.com/
2025-02-04 08:41:45.384130 | orchestrator | .d..t...... src/github.com/osism/
2025-02-04 08:41:45.384147 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-02-04 08:41:45.384164 | orchestrator | RedHat.yml
2025-02-04 08:41:45.407254 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-02-04 08:41:45.407272 | orchestrator | RedHat.yml
2025-02-04 08:41:45.407333 | orchestrator | = 2.2.0"...
2025-02-04 08:42:00.893450 | orchestrator | 08:42:00.893 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-02-04 08:42:00.947956 | orchestrator | 08:42:00.947 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-02-04 08:42:02.317728 | orchestrator | 08:42:02.317 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.0.0...
2025-02-04 08:42:03.381319 | orchestrator | 08:42:03.381 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.0.0 (signed, key ID 4F80527A391BEFD2)
2025-02-04 08:42:04.346077 | orchestrator | 08:42:04.345 STDOUT terraform: - Installing hashicorp/local v2.5.2...
2025-02-04 08:42:05.391791 | orchestrator | 08:42:05.391 STDOUT terraform: - Installed hashicorp/local v2.5.2 (signed, key ID 0C0AF313E5FD9F80)
2025-02-04 08:42:06.260761 | orchestrator | 08:42:06.260 STDOUT terraform: - Installing hashicorp/null v3.2.3...
2025-02-04 08:42:07.067663 | orchestrator | 08:42:07.067 STDOUT terraform: - Installed hashicorp/null v3.2.3 (signed, key ID 0C0AF313E5FD9F80)
2025-02-04 08:42:07.067761 | orchestrator | 08:42:07.067 STDOUT terraform: Providers are signed by their developers.
2025-02-04 08:42:07.067783 | orchestrator | 08:42:07.067 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-02-04 08:42:07.067937 | orchestrator | 08:42:07.067 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-02-04 08:42:07.068016 | orchestrator | 08:42:07.067 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-02-04 08:42:07.068037 | orchestrator | 08:42:07.067 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-02-04 08:42:07.068458 | orchestrator | 08:42:07.067 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-02-04 08:42:07.068467 | orchestrator | 08:42:07.067 STDOUT terraform: you run "tofu init" in the future.
2025-02-04 08:42:07.068475 | orchestrator | 08:42:07.068 STDOUT terraform: OpenTofu has been successfully initialized!
2025-02-04 08:42:07.068550 | orchestrator | 08:42:07.068 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-02-04 08:42:07.068601 | orchestrator | 08:42:07.068 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-02-04 08:42:07.068661 | orchestrator | 08:42:07.068 STDOUT terraform: should now work.
2025-02-04 08:42:07.068670 | orchestrator | 08:42:07.068 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-02-04 08:42:07.068707 | orchestrator | 08:42:07.068 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-02-04 08:42:07.068749 | orchestrator | 08:42:07.068 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-02-04 08:42:07.506705 | orchestrator | 08:42:07.505 STDOUT terraform: Created and switched to workspace "ci"!
2025-02-04 08:42:07.735387 | orchestrator | 08:42:07.506 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-02-04 08:42:07.735473 | orchestrator | 08:42:07.506 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-02-04 08:42:07.735485 | orchestrator | 08:42:07.506 STDOUT terraform: for this configuration.
2025-02-04 08:42:07.735507 | orchestrator | 08:42:07.735 STDOUT terraform: ci.auto.tfvars
2025-02-04 08:42:10.470580 | orchestrator | 08:42:10.470 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-02-04 08:42:11.052990 | orchestrator | 08:42:11.052 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-02-04 08:42:11.351405 | orchestrator | 08:42:11.351 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-02-04 08:42:11.351467 | orchestrator | 08:42:11.351 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-02-04 08:42:11.351476 | orchestrator | 08:42:11.351 STDOUT terraform:  + create
2025-02-04 08:42:11.351486 | orchestrator | 08:42:11.351 STDOUT terraform:  <= read (data resources)
2025-02-04 08:42:11.351519 | orchestrator | 08:42:11.351 STDOUT terraform: OpenTofu will perform the following actions:
2025-02-04 08:42:11.351527 | orchestrator | 08:42:11.351 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply
2025-02-04 08:42:11.351535 | orchestrator | 08:42:11.351 STDOUT terraform:  # (config refers to values not yet known)
2025-02-04 08:42:11.351578 | orchestrator | 08:42:11.351 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-02-04 08:42:11.351586 | orchestrator | 08:42:11.351 STDOUT terraform:  + checksum = (known after apply)
2025-02-04 08:42:11.351622 | orchestrator | 08:42:11.351 STDOUT terraform:  + created_at = (known after apply)
2025-02-04 08:42:11.351654 | orchestrator | 08:42:11.351 STDOUT terraform:  + file = (known after apply)
2025-02-04 08:42:11.351689 | orchestrator | 08:42:11.351 STDOUT terraform:  + id = (known after apply)
2025-02-04 08:42:11.351720 | orchestrator | 08:42:11.351 STDOUT terraform:  + metadata = (known after apply)
2025-02-04 08:42:11.351754 | orchestrator | 08:42:11.351 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-02-04 08:42:11.351786 | orchestrator | 08:42:11.351 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-02-04 08:42:11.351808 | orchestrator | 08:42:11.351 STDOUT terraform:  + most_recent = true
2025-02-04 08:42:11.351856 | orchestrator | 08:42:11.351 STDOUT terraform:  + name = (known after apply)
2025-02-04 08:42:11.351911 | orchestrator | 08:42:11.351 STDOUT terraform:  + protected = (known after apply)
2025-02-04 08:42:11.351937 | orchestrator | 08:42:11.351 STDOUT terraform:  + region = (known after apply)
2025-02-04 08:42:11.351974 | orchestrator | 08:42:11.351 STDOUT terraform:  + schema = (known after apply)
2025-02-04 08:42:11.352004 | orchestrator | 08:42:11.351 STDOUT terraform:  + size_bytes = (known after apply)
2025-02-04 08:42:11.352037 | orchestrator | 08:42:11.351 STDOUT terraform:  + tags = (known after apply)
2025-02-04 08:42:11.352074 | orchestrator | 08:42:11.352 STDOUT terraform:  + updated_at = (known after apply)
2025-02-04 08:42:11.352082 | orchestrator | 08:42:11.352 STDOUT terraform:  }
2025-02-04 08:42:11.352134 | orchestrator | 08:42:11.352 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply
2025-02-04 08:42:11.352179 | orchestrator | 08:42:11.352 STDOUT terraform:  # (config refers to values not yet known)
2025-02-04 08:42:11.352211 | orchestrator | 08:42:11.352 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-02-04 08:42:11.352241 | orchestrator | 08:42:11.352 STDOUT terraform:  + checksum = (known after apply)
2025-02-04 08:42:11.352275 | orchestrator | 08:42:11.352 STDOUT terraform:  + created_at = (known after apply)
2025-02-04 08:42:11.352307 | orchestrator | 08:42:11.352 STDOUT terraform:  + file = (known after apply)
2025-02-04 08:42:11.352349 | orchestrator | 08:42:11.352 STDOUT terraform:  + id = (known after apply)
2025-02-04 08:42:11.352376 | orchestrator | 08:42:11.352 STDOUT terraform:  + metadata = (known after apply)
2025-02-04 08:42:11.352407 | orchestrator | 08:42:11.352 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-02-04 08:42:11.352439 | orchestrator | 08:42:11.352 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-02-04 08:42:11.352460 | orchestrator | 08:42:11.352 STDOUT terraform:  + most_recent = true
2025-02-04 08:42:11.352493 | orchestrator | 08:42:11.352 STDOUT terraform:  + name = (known after apply)
2025-02-04 08:42:11.352529 | orchestrator | 08:42:11.352 STDOUT terraform:  + protected = (known after apply)
2025-02-04 08:42:11.352554 | orchestrator | 08:42:11.352 STDOUT terraform:  + region = (known after apply)
2025-02-04 08:42:11.352587 | orchestrator | 08:42:11.352 STDOUT terraform:  + schema = (known after apply)
2025-02-04 08:42:11.352621 | orchestrator | 08:42:11.352 STDOUT terraform:  + size_bytes = (known after apply)
2025-02-04 08:42:11.352654 | orchestrator | 08:42:11.352 STDOUT terraform:  + tags = (known after apply)
2025-02-04 08:42:11.352686 | orchestrator | 08:42:11.352 STDOUT terraform:  + updated_at = (known after apply)
2025-02-04 08:42:11.352702 | orchestrator | 08:42:11.352 STDOUT terraform:  }
2025-02-04 08:42:11.352734 | orchestrator | 08:42:11.352 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created
2025-02-04 08:42:11.352763 | orchestrator | 08:42:11.352 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" {
2025-02-04 08:42:11.352805 | orchestrator | 08:42:11.352 STDOUT terraform:  + content = (known after apply)
2025-02-04 08:42:11.352860 | orchestrator | 08:42:11.352 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-02-04 08:42:11.352896 | orchestrator | 08:42:11.352 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-02-04 08:42:11.353027 | orchestrator | 08:42:11.352 STDOUT terraform:  + content_md5 = (known after apply)
2025-02-04 08:42:11.353102 | orchestrator | 08:42:11.352 STDOUT terraform:  + content_sha1 = (known after apply)
2025-02-04 08:42:11.353116 | orchestrator | 08:42:11.352 STDOUT terraform:  + content_sha256 = (known after apply)
2025-02-04 08:42:11.353154 | orchestrator | 08:42:11.353 STDOUT terraform:  + content_sha512 = (known after apply)
2025-02-04 08:42:11.353163 | orchestrator | 08:42:11.353 STDOUT terraform:  + directory_permission = "0777"
2025-02-04 08:42:11.353170 | orchestrator | 08:42:11.353 STDOUT terraform:  + file_permission = "0644"
2025-02-04 08:42:11.353190 | orchestrator | 08:42:11.353 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci"
2025-02-04 08:42:11.353227 | orchestrator | 08:42:11.353 STDOUT terraform:  + id = (known after apply)
2025-02-04 08:42:11.353235 | orchestrator | 08:42:11.353 STDOUT terraform:  }
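The local_file entries in this plan persist generated values to the working directory (the manager address above; the SSH public key and inventory below). A minimal sketch of what such a definition might look like in the testbed's Terraform configuration, assuming the content comes from a floating IP resource and the filename suffix is derived from the workspace name (both assumptions; only the resource type, name, and the planned filename ".MANAGER_ADDRESS.ci" appear in the plan output):

    resource "local_file" "MANAGER_ADDRESS" {
      # Assumption: the address comes from a floating IP resource defined elsewhere.
      content  = openstack_networking_floatingip_v2.manager_floating_ip.address
      # Yields ".MANAGER_ADDRESS.ci" when the active workspace is "ci", matching the plan.
      filename = ".MANAGER_ADDRESS.${terraform.workspace}"
    }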
2025-02-04 08:42:11.353244 | orchestrator | 08:42:11.353 STDOUT terraform:  # local_file.id_rsa_pub will be created
2025-02-04 08:42:11.353292 | orchestrator | 08:42:11.353 STDOUT terraform:  + resource "local_file" "id_rsa_pub" {
2025-02-04 08:42:11.353302 | orchestrator | 08:42:11.353 STDOUT terraform:  + content = (known after apply)
2025-02-04 08:42:11.353340 | orchestrator | 08:42:11.353 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-02-04 08:42:11.353380 | orchestrator | 08:42:11.353 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-02-04 08:42:11.353422 | orchestrator | 08:42:11.353 STDOUT terraform:  + content_md5 = (known after apply)
2025-02-04 08:42:11.353463 | orchestrator | 08:42:11.353 STDOUT terraform:  + content_sha1 = (known after apply)
2025-02-04 08:42:11.353503 | orchestrator | 08:42:11.353 STDOUT terraform:  + content_sha256 = (known after apply)
2025-02-04 08:42:11.353540 | orchestrator | 08:42:11.353 STDOUT terraform:  + content_sha512 = (known after apply)
2025-02-04 08:42:11.353565 | orchestrator | 08:42:11.353 STDOUT terraform:  + directory_permission = "0777"
2025-02-04 08:42:11.353589 | orchestrator | 08:42:11.353 STDOUT terraform:  + file_permission = "0644"
2025-02-04 08:42:11.353626 | orchestrator | 08:42:11.353 STDOUT terraform:  + filename = ".id_rsa.ci.pub"
2025-02-04 08:42:11.353667 | orchestrator | 08:42:11.353 STDOUT terraform:  + id = (known after apply)
2025-02-04 08:42:11.353676 | orchestrator | 08:42:11.353 STDOUT terraform:  }
2025-02-04 08:42:11.353704 | orchestrator | 08:42:11.353 STDOUT terraform:  # local_file.inventory will be created
2025-02-04 08:42:11.353743 | orchestrator | 08:42:11.353 STDOUT terraform:  + resource "local_file" "inventory" {
2025-02-04 08:42:11.353773 | orchestrator | 08:42:11.353 STDOUT terraform:  + content = (known after apply)
2025-02-04 08:42:11.353812 | orchestrator | 08:42:11.353 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-02-04 08:42:11.353881 | orchestrator | 08:42:11.353 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-02-04 08:42:11.353929 | orchestrator | 08:42:11.353 STDOUT terraform:  + content_md5 = (known after apply)
2025-02-04 08:42:11.353969 | orchestrator | 08:42:11.353 STDOUT terraform:  + content_sha1 = (known after apply)
2025-02-04 08:42:11.354009 | orchestrator | 08:42:11.353 STDOUT terraform:  + content_sha256 = (known after apply)
2025-02-04 08:42:11.354062 | orchestrator | 08:42:11.353 STDOUT terraform:  + content_sha512 = (known after apply)
2025-02-04 08:42:11.354083 | orchestrator | 08:42:11.354 STDOUT terraform:  + directory_permission = "0777"
2025-02-04 08:42:11.354112 | orchestrator | 08:42:11.354 STDOUT terraform:  + file_permission = "0644"
2025-02-04 08:42:11.354147 | orchestrator | 08:42:11.354 STDOUT terraform:  + filename = "inventory.ci"
2025-02-04 08:42:11.354195 | orchestrator | 08:42:11.354 STDOUT terraform:  + id = (known after apply)
2025-02-04 08:42:11.354229 | orchestrator | 08:42:11.354 STDOUT terraform:  }
2025-02-04 08:42:11.354239 | orchestrator | 08:42:11.354 STDOUT terraform:  # local_sensitive_file.id_rsa will be created
2025-02-04 08:42:11.354265 | orchestrator | 08:42:11.354 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" {
2025-02-04 08:42:11.354304 | orchestrator | 08:42:11.354 STDOUT terraform:  + content = (sensitive value)
2025-02-04 08:42:11.354342 | orchestrator | 08:42:11.354 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-02-04 08:42:11.354384 | orchestrator | 08:42:11.354 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-02-04 08:42:11.354453 | orchestrator | 08:42:11.354 STDOUT terraform:  + content_md5 = (known after apply)
2025-02-04 08:42:11.354465 | orchestrator | 08:42:11.354 STDOUT terraform:  + content_sha1 = (known after apply)
2025-02-04 08:42:11.354521 | orchestrator | 08:42:11.354 STDOUT terraform:  + content_sha256 = (known after apply)
2025-02-04 08:42:11.354544 | orchestrator | 08:42:11.354 STDOUT terraform:  + content_sha512 = (known after apply)
2025-02-04 08:42:11.354571 | orchestrator | 08:42:11.354 STDOUT terraform:  + directory_permission = "0700"
2025-02-04 08:42:11.354604 | orchestrator | 08:42:11.354 STDOUT terraform:  + file_permission = "0600"
2025-02-04 08:42:11.354632 | orchestrator | 08:42:11.354 STDOUT terraform:  + filename = ".id_rsa.ci"
2025-02-04 08:42:11.354672 | orchestrator | 08:42:11.354 STDOUT terraform:  + id = (known after apply)
2025-02-04 08:42:11.354681 | orchestrator | 08:42:11.354 STDOUT terraform:  }
2025-02-04 08:42:11.354716 | orchestrator | 08:42:11.354 STDOUT terraform:  # null_resource.node_semaphore will be created
2025-02-04 08:42:11.354749 | orchestrator | 08:42:11.354 STDOUT terraform:  + resource "null_resource" "node_semaphore" {
2025-02-04 08:42:11.354772 | orchestrator | 08:42:11.354 STDOUT terraform:  + id = (known after apply)
2025-02-04 08:42:11.354781 | orchestrator | 08:42:11.354 STDOUT terraform:  }
2025-02-04 08:42:11.354899 | orchestrator | 08:42:11.354 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-02-04 08:42:11.354952 | orchestrator | 08:42:11.354 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-02-04 08:42:11.354987 | orchestrator | 08:42:11.354 STDOUT terraform:  + attachment = (known after apply)
2025-02-04 08:42:11.355009 | orchestrator | 08:42:11.354 STDOUT terraform:  + availability_zone = "nova"
2025-02-04 08:42:11.355045 | orchestrator | 08:42:11.355 STDOUT terraform:  + id = (known after apply)
2025-02-04 08:42:11.355080 | orchestrator | 08:42:11.355 STDOUT terraform:  + image_id = (known after apply)
2025-02-04 08:42:11.355116 | orchestrator | 08:42:11.355 STDOUT terraform:  + metadata = (known after apply)
2025-02-04 08:42:11.355160 | orchestrator | 08:42:11.355 STDOUT terraform:  + name = "testbed-volume-manager-base"
2025-02-04 08:42:11.355198 | orchestrator | 08:42:11.355 STDOUT terraform:  + region = (known after apply)
2025-02-04 08:42:11.355222 | orchestrator | 08:42:11.355 STDOUT terraform:  + size = 80
2025-02-04 08:42:11.355245 | orchestrator | 08:42:11.355 STDOUT terraform:  + volume_type = "ssd"
2025-02-04 08:42:11.355262 | orchestrator | 08:42:11.355 STDOUT terraform:  }
2025-02-04 08:42:11.355311 | orchestrator | 08:42:11.355 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-02-04 08:42:11.355365 | orchestrator | 08:42:11.355 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-02-04 08:42:11.355401 | orchestrator | 08:42:11.355 STDOUT terraform:  + attachment = (known after apply)
2025-02-04 08:42:11.355422 | orchestrator | 08:42:11.355 STDOUT terraform:  + availability_zone = "nova"
2025-02-04 08:42:11.355458 | orchestrator | 08:42:11.355 STDOUT terraform:  + id = (known after apply)
2025-02-04 08:42:11.355493 | orchestrator | 08:42:11.355 STDOUT terraform:  + image_id = (known after apply)
2025-02-04 08:42:11.355528 | orchestrator | 08:42:11.355 STDOUT terraform:  + metadata = (known after apply)
2025-02-04 08:42:11.355573 | orchestrator | 08:42:11.355 STDOUT terraform:  + name = "testbed-volume-0-node-base"
2025-02-04 08:42:11.355609 | orchestrator | 08:42:11.355 STDOUT terraform:  + region = (known after apply)
2025-02-04 08:42:11.355631 | orchestrator | 08:42:11.355 STDOUT terraform:  + size = 80
2025-02-04 08:42:11.355653 | orchestrator | 08:42:11.355 STDOUT terraform:  + volume_type = "ssd"
2025-02-04 08:42:11.355662 | orchestrator | 08:42:11.355 STDOUT terraform:  }
2025-02-04 08:42:11.355718 | orchestrator | 08:42:11.355 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-02-04 08:42:11.355770 | orchestrator | 08:42:11.355 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-02-04 08:42:11.355804 | orchestrator | 08:42:11.355 STDOUT terraform:  + attachment = (known after apply)
2025-02-04 08:42:11.355827 | orchestrator | 08:42:11.355 STDOUT terraform:  + availability_zone = "nova"
2025-02-04 08:42:11.355876 | orchestrator | 08:42:11.355 STDOUT terraform:  + id = (known after apply)
2025-02-04 08:42:11.355910 | orchestrator | 08:42:11.355 STDOUT terraform:  + image_id = (known after apply)
2025-02-04 08:42:11.355946 | orchestrator | 08:42:11.355 STDOUT terraform:  + metadata = (known after apply)
2025-02-04 08:42:11.355991 | orchestrator | 08:42:11.355 STDOUT terraform:  + name = "testbed-volume-1-node-base"
2025-02-04 08:42:11.356029 | orchestrator | 08:42:11.355 STDOUT terraform:  + region = (known after apply)
2025-02-04 08:42:11.356049 | orchestrator | 08:42:11.356 STDOUT terraform:  + size = 80
2025-02-04 08:42:11.356071 | orchestrator | 08:42:11.356 STDOUT terraform:  + volume_type = "ssd"
2025-02-04 08:42:11.356080 | orchestrator | 08:42:11.356 STDOUT terraform:  }
2025-02-04 08:42:11.356135 | orchestrator | 08:42:11.356 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-02-04 08:42:11.356187 | orchestrator | 08:42:11.356 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-02-04 08:42:11.356221 | orchestrator | 08:42:11.356 STDOUT terraform:  + attachment = (known after apply)
2025-02-04 08:42:11.356243 | orchestrator | 08:42:11.356 STDOUT terraform:  + availability_zone = "nova"
2025-02-04 08:42:11.356279 | orchestrator | 08:42:11.356 STDOUT terraform:  + id = (known after apply)
2025-02-04 08:42:11.356316 | orchestrator | 08:42:11.356 STDOUT terraform:  + image_id = (known after apply)
2025-02-04 08:42:11.356351 | orchestrator | 08:42:11.356 STDOUT terraform:  + metadata = (known after apply)
2025-02-04 08:42:11.356395 | orchestrator | 08:42:11.356 STDOUT terraform:  + name = "testbed-volume-2-node-base"
2025-02-04 08:42:11.356430 | orchestrator | 08:42:11.356 STDOUT terraform:  + region = (known after apply)
2025-02-04 08:42:11.356453 | orchestrator | 08:42:11.356 STDOUT terraform:  + size = 80
2025-02-04 08:42:11.356489 | orchestrator | 08:42:11.356 STDOUT terraform:  + volume_type = "ssd"
2025-02-04 08:42:11.356613 | orchestrator | 08:42:11.356 STDOUT terraform:  }
2025-02-04 08:42:11.356641 | orchestrator | 08:42:11.356 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-02-04 08:42:11.356687 | orchestrator | 08:42:11.356 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-02-04 08:42:11.356694 | orchestrator | 08:42:11.356 STDOUT terraform:  + attachment = (known after apply)
2025-02-04 08:42:11.356700 | orchestrator | 08:42:11.356 STDOUT terraform:  + availability_zone = "nova"
2025-02-04 08:42:11.356707 | orchestrator | 08:42:11.356 STDOUT terraform:  + id = (known after apply)
2025-02-04 08:42:11.356742 | orchestrator | 08:42:11.356 STDOUT terraform:  + image_id = (known after apply)
2025-02-04 08:42:11.356757 | orchestrator | 08:42:11.356 STDOUT terraform:  + metadata = (known after apply)
2025-02-04 08:42:11.356787 | orchestrator | 08:42:11.356 STDOUT terraform:  + name = "testbed-volume-3-node-base"
2025-02-04 08:42:11.356822 | orchestrator | 08:42:11.356 STDOUT terraform:  + region = (known after apply)
2025-02-04 08:42:11.356868 | orchestrator | 08:42:11.356 STDOUT terraform:  + size = 80
2025-02-04 08:42:11.356875 | orchestrator | 08:42:11.356 STDOUT terraform:  + volume_type = "ssd"
2025-02-04 08:42:11.356882 | orchestrator | 08:42:11.356 STDOUT terraform:  }
2025-02-04 08:42:11.356950 | orchestrator | 08:42:11.356 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-02-04 08:42:11.356986 | orchestrator | 08:42:11.356 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-02-04 08:42:11.357032 | orchestrator | 08:42:11.356 STDOUT terraform:  + attachment = (known after apply)
2025-02-04 08:42:11.357040 | orchestrator | 08:42:11.357 STDOUT terraform:  + availability_zone = "nova"
2025-02-04 08:42:11.357077 | orchestrator | 08:42:11.357 STDOUT terraform:  + id = (known after apply)
2025-02-04 08:42:11.357117 | orchestrator | 08:42:11.357 STDOUT terraform:  + image_id = (known after apply)
2025-02-04 08:42:11.357145 | orchestrator | 08:42:11.357 STDOUT terraform:  + metadata = (known after apply)
2025-02-04 08:42:11.357201 | orchestrator | 08:42:11.357 STDOUT terraform:  + name = "testbed-volume-4-node-base"
2025-02-04 08:42:11.357227 | orchestrator | 08:42:11.357 STDOUT terraform:  + region = (known after apply)
2025-02-04 08:42:11.357249 | orchestrator | 08:42:11.357 STDOUT terraform:  + size = 80
2025-02-04 08:42:11.357274 | orchestrator | 08:42:11.357 STDOUT terraform:  + volume_type = "ssd"
2025-02-04 08:42:11.357330 | orchestrator | 08:42:11.357 STDOUT terraform:  }
2025-02-04 08:42:11.357338 | orchestrator | 08:42:11.357 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-02-04 08:42:11.357384 | orchestrator | 08:42:11.357 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-02-04 08:42:11.357419 | orchestrator | 08:42:11.357 STDOUT terraform:  + attachment = (known after apply)
2025-02-04 08:42:11.357456 | orchestrator | 08:42:11.357 STDOUT terraform:  + availability_zone = "nova"
2025-02-04 08:42:11.357476 | orchestrator | 08:42:11.357 STDOUT terraform:  + id = (known after apply)
2025-02-04 08:42:11.357513 | orchestrator | 08:42:11.357 STDOUT terraform:  + image_id = (known after apply)
2025-02-04 08:42:11.357548 | orchestrator | 08:42:11.357 STDOUT terraform:  + metadata = (known after apply)
2025-02-04 08:42:11.357592 | orchestrator | 08:42:11.357 STDOUT terraform:  + name = "testbed-volume-5-node-base"
2025-02-04 08:42:11.357629 | orchestrator | 08:42:11.357 STDOUT terraform:  + region = (known after apply)
2025-02-04 08:42:11.357648 | orchestrator | 08:42:11.357 STDOUT terraform:  + size = 80
2025-02-04 08:42:11.357671 | orchestrator | 08:42:11.357 STDOUT terraform:  + volume_type = "ssd"
2025-02-04 08:42:11.357679 | orchestrator | 08:42:11.357 STDOUT terraform:  }
2025-02-04 08:42:11.357733 | orchestrator | 08:42:11.357 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-02-04 08:42:11.357786 | orchestrator | 08:42:11.357 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-02-04 08:42:11.357820 | orchestrator | 08:42:11.357 STDOUT terraform:  + attachment = (known after apply)
2025-02-04 08:42:11.357868 | orchestrator | 08:42:11.357 STDOUT terraform:  + availability_zone = "nova"
2025-02-04 08:42:11.357905 | orchestrator | 08:42:11.357 STDOUT terraform:  + id = (known after apply)
2025-02-04 08:42:11.357952 | orchestrator | 08:42:11.357 STDOUT terraform:  + metadata = (known after apply)
2025-02-04 08:42:11.357980 | orchestrator | 08:42:11.357 STDOUT terraform:  + name = "testbed-volume-0-node-0"
2025-02-04 08:42:11.358028 | orchestrator | 08:42:11.357 STDOUT terraform:  + region = (known after apply)
2025-02-04 08:42:11.359021 | orchestrator | 08:42:11.358 STDOUT terraform:  + size = 20
2025-02-04 08:42:11.359059 | orchestrator | 08:42:11.359 STDOUT terraform:  + volume_type = "ssd"
2025-02-04 08:42:11.359077 | orchestrator | 08:42:11.359 STDOUT terraform:  }
2025-02-04 08:42:11.359140 | orchestrator | 08:42:11.359 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created
2025-02-04 08:42:11.359197 | orchestrator | 08:42:11.359 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-02-04 08:42:11.359249 | orchestrator | 08:42:11.359 STDOUT terraform:  + attachment = (known after apply)
2025-02-04 08:42:11.359274 | orchestrator | 08:42:11.359 STDOUT terraform:  + availability_zone = "nova"
2025-02-04 08:42:11.359325 | orchestrator | 08:42:11.359 STDOUT terraform:  + id = (known after apply)
2025-02-04 08:42:11.359359 | orchestrator | 08:42:11.359 STDOUT terraform:  + metadata = (known after apply)
2025-02-04 08:42:11.359417 | orchestrator | 08:42:11.359 STDOUT terraform:  + name = "testbed-volume-1-node-1"
2025-02-04 08:42:11.359468 | orchestrator | 08:42:11.359 STDOUT terraform:  + region = (known after apply)
2025-02-04 08:42:11.359488 | orchestrator | 08:42:11.359 STDOUT terraform:  + size = 20
2025-02-04 08:42:11.359512 | orchestrator | 08:42:11.359 STDOUT terraform:  + volume_type = "ssd"
2025-02-04 08:42:11.359542 | orchestrator | 08:42:11.359 STDOUT terraform:  }
2025-02-04 08:42:11.359591 | orchestrator | 08:42:11.359 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created
2025-02-04 08:42:11.359655 | orchestrator | 08:42:11.359 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-02-04 08:42:11.359703 | orchestrator | 08:42:11.359 STDOUT terraform:  + attachment = (known after apply)
2025-02-04 08:42:11.359724 | orchestrator | 08:42:11.359 STDOUT terraform:  + availability_zone = "nova"
2025-02-04 08:42:11.359774 | orchestrator | 08:42:11.359 STDOUT terraform:  + id = (known after apply)
2025-02-04 08:42:11.359809 | orchestrator | 08:42:11.359 STDOUT terraform:  + metadata = (known after apply)
2025-02-04 08:42:11.359877 | orchestrator | 08:42:11.359 STDOUT terraform:  + name = "testbed-volume-2-node-2"
2025-02-04 08:42:11.359932 | orchestrator | 08:42:11.359 STDOUT terraform:  + region = (known after apply)
2025-02-04 08:42:11.359939 | orchestrator | 08:42:11.359 STDOUT terraform:  + size = 20
2025-02-04 08:42:11.359968 | orchestrator | 08:42:11.359 STDOUT terraform:  + volume_type = "ssd"
2025-02-04 08:42:11.359975 | orchestrator | 08:42:11.359 STDOUT terraform:  }
2025-02-04 08:42:11.360041 | orchestrator | 08:42:11.359 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created
2025-02-04 08:42:11.360110 | orchestrator | 08:42:11.360 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-02-04 08:42:11.360162 | orchestrator | 08:42:11.360 STDOUT terraform:  + attachment = (known after apply)
2025-02-04 08:42:11.360169 | orchestrator | 08:42:11.360 STDOUT terraform:  + availability_zone = "nova"
2025-02-04 08:42:11.360208 | orchestrator | 08:42:11.360 STDOUT terraform:  + id = (known after apply)
2025-02-04 08:42:11.360255 | orchestrator | 08:42:11.360 STDOUT terraform:  + metadata = (known after apply)
2025-02-04 08:42:11.360312 | orchestrator | 08:42:11.360 STDOUT terraform:  + name = "testbed-volume-3-node-3"
2025-02-04 08:42:11.360362 | orchestrator | 08:42:11.360 STDOUT terraform:  + region = (known after apply)
2025-02-04 08:42:11.360387 | orchestrator | 08:42:11.360 STDOUT terraform:  + size = 20
2025-02-04 08:42:11.360410 | orchestrator | 08:42:11.360 STDOUT terraform:  + volume_type = "ssd"
2025-02-04 08:42:11.360417 | orchestrator | 08:42:11.360 STDOUT terraform:  }
2025-02-04 08:42:11.360485 | orchestrator | 08:42:11.360 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created
2025-02-04 08:42:11.360548 | orchestrator | 08:42:11.360 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-02-04 08:42:11.360582 | orchestrator | 08:42:11.360 STDOUT terraform:  + attachment = (known after apply)
2025-02-04 08:42:11.360627 | orchestrator | 08:42:11.360 STDOUT terraform:  + availability_zone = "nova"
2025-02-04 08:42:11.360655 | orchestrator | 08:42:11.360 STDOUT terraform:  + id = (known after apply)
2025-02-04 08:42:11.360703 | orchestrator | 08:42:11.360 STDOUT terraform:  + metadata = (known after apply)
2025-02-04 08:42:11.360746 | orchestrator | 08:42:11.360 STDOUT terraform:  + name = "testbed-volume-4-node-4"
2025-02-04 08:42:11.360795 | orchestrator | 08:42:11.360 STDOUT terraform:  + region = (known after apply)
2025-02-04 08:42:11.360817 | orchestrator | 08:42:11.360 STDOUT terraform:  + size = 20
2025-02-04 08:42:11.360936 | orchestrator | 08:42:11.360 STDOUT terraform:  + volume_type = "ssd"
2025-02-04 08:42:11.361421 | orchestrator | 08:42:11.360 STDOUT terraform:  }
2025-02-04 08:42:11.361444 | orchestrator | 08:42:11.360 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created
2025-02-04 08:42:11.361465 | orchestrator | 08:42:11.361 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-02-04 08:42:11.361520 | orchestrator | 08:42:11.361 STDOUT terraform:  + attachment = (known after apply)
2025-02-04 08:42:11.361528 | orchestrator | 08:42:11.361 STDOUT terraform:  + availability_zone = "nova"
2025-02-04 08:42:11.361574 | orchestrator | 08:42:11.361 STDOUT terraform:  + id = (known after apply)
2025-02-04 08:42:11.361619 | orchestrator | 08:42:11.361 STDOUT terraform:  + metadata = (known after apply)
2025-02-04 08:42:11.361687 | orchestrator | 08:42:11.361 STDOUT terraform:  + name = "testbed-volume-5-node-5"
2025-02-04 08:42:11.361706 | orchestrator | 08:42:11.361 STDOUT terraform:  + region = (known after apply)
2025-02-04 08:42:11.361742 | orchestrator | 08:42:11.361 STDOUT terraform:  + size = 20
2025-02-04 08:42:11.361766 | orchestrator | 08:42:11.361 STDOUT terraform:  + volume_type = "ssd"
2025-02-04 08:42:11.361773 | orchestrator | 08:42:11.361 STDOUT terraform:  }
2025-02-04 08:42:11.362029 | orchestrator | 08:42:11.361 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created
2025-02-04 08:42:11.362118 | orchestrator | 08:42:11.362 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-02-04 08:42:11.362142 | orchestrator | 08:42:11.362 STDOUT terraform:  + attachment = (known after apply)
2025-02-04 08:42:11.362167 | orchestrator | 08:42:11.362 STDOUT terraform:  + availability_zone = "nova"
2025-02-04 08:42:11.362223 | orchestrator | 08:42:11.362 STDOUT terraform:  + id = (known after apply)
2025-02-04 08:42:11.362253 | orchestrator | 08:42:11.362 STDOUT terraform:  + metadata = (known after apply)
2025-02-04 08:42:11.362317 | orchestrator | 08:42:11.362 STDOUT terraform:  + name = "testbed-volume-6-node-0"
2025-02-04 08:42:11.362360 | orchestrator | 08:42:11.362 STDOUT terraform:  + region = (known after apply)
2025-02-04 08:42:11.362381 | orchestrator | 08:42:11.362 STDOUT terraform:  + size = 20
2025-02-04 08:42:11.362407 | orchestrator | 08:42:11.362 STDOUT terraform:  + volume_type = "ssd"
2025-02-04 08:42:11.362432 | orchestrator | 08:42:11.362 STDOUT terraform:  }
2025-02-04 08:42:11.362484 | orchestrator | 08:42:11.362 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created
2025-02-04 08:42:11.362555 | orchestrator | 08:42:11.362 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-02-04 08:42:11.362915 | orchestrator | 08:42:11.362 STDOUT terraform:  + attachment = (known after apply)
2025-02-04 08:42:11.362925 | orchestrator | 08:42:11.362 STDOUT terraform:  + availability_zone = "nova"
2025-02-04 08:42:11.362990 | orchestrator | 08:42:11.362 STDOUT terraform:  + id = (known after apply)
2025-02-04 08:42:11.363014 | orchestrator | 08:42:11.362 STDOUT terraform:  + metadata = (known after apply)
2025-02-04 08:42:11.363080 | orchestrator | 08:42:11.363 STDOUT terraform:  + name = "testbed-volume-7-node-1"
2025-02-04 08:42:11.363137 | orchestrator | 08:42:11.363 STDOUT terraform:  + region = (known after apply)
2025-02-04 08:42:11.363144 | orchestrator | 08:42:11.363 STDOUT terraform:  + size = 20
2025-02-04 08:42:11.363165 | orchestrator | 08:42:11.363 STDOUT terraform:  + volume_type = "ssd"
2025-02-04 08:42:11.363172 | orchestrator | 08:42:11.363 STDOUT terraform:  }
2025-02-04 08:42:11.363247 | orchestrator | 08:42:11.363 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created
2025-02-04 08:42:11.363302 | orchestrator | 08:42:11.363 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-02-04 08:42:11.363356 | orchestrator | 08:42:11.363 STDOUT terraform:  + attachment = (known after apply)
2025-02-04 08:42:11.363380 | orchestrator | 08:42:11.363 STDOUT terraform:  + availability_zone = "nova"
2025-02-04 08:42:11.363430 | orchestrator | 08:42:11.363 STDOUT terraform:  + id = (known after apply)
2025-02-04 08:42:11.363465 | orchestrator | 08:42:11.363 STDOUT terraform:  + metadata = (known after apply)
2025-02-04 08:42:11.363521 | orchestrator | 08:42:11.363 STDOUT terraform:  + name = "testbed-volume-8-node-2"
2025-02-04 08:42:11.363556 | orchestrator | 08:42:11.363 STDOUT terraform:  + region = (known after apply)
2025-02-04 08:42:11.363591 | orchestrator | 08:42:11.363 STDOUT terraform:  + size = 20
2025-02-04 08:42:11.363615 | orchestrator | 08:42:11.363 STDOUT terraform:  + volume_type = "ssd"
2025-02-04 08:42:11.363622 | orchestrator | 08:42:11.363 STDOUT terraform:  }
2025-02-04 08:42:11.363755 | orchestrator | 08:42:11.363 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[9] will be created
2025-02-04 08:42:11.363813 | orchestrator | 08:42:11.363 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-02-04 08:42:11.363855 | orchestrator | 08:42:11.363 STDOUT terraform:  + attachment = (known after apply)
2025-02-04 08:42:11.363891 | orchestrator | 08:42:11.363 STDOUT terraform:  + availability_zone = "nova"
2025-02-04 08:42:11.363926 | orchestrator | 08:42:11.363 STDOUT terraform:  + id = (known after apply)
2025-02-04 08:42:11.363961 | orchestrator | 08:42:11.363 STDOUT terraform:  + metadata = (known after apply)
2025-02-04 08:42:11.364019 | orchestrator | 08:42:11.363 STDOUT terraform:  + name = "testbed-volume-9-node-3"
2025-02-04 08:42:11.364056 | orchestrator | 08:42:11.364 STDOUT terraform:  + region = (known after apply)
2025-02-04 08:42:11.364074 | orchestrator | 08:42:11.364 STDOUT terraform:  + size = 20
2025-02-04 08:42:11.364092 | orchestrator | 08:42:11.364 STDOUT terraform:  + volume_type = "ssd"
2025-02-04 08:42:11.364099 | orchestrator | 08:42:11.364 STDOUT terraform:  }
2025-02-04 08:42:11.364155 | orchestrator | 08:42:11.364 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[10] will be created
2025-02-04 08:42:11.364210 | orchestrator | 08:42:11.364 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-02-04 08:42:11.364485 | orchestrator | 08:42:11.364 STDOUT terraform:  + attachment = (known after apply)
2025-02-04 08:42:11.364495 | orchestrator | 08:42:11.364 STDOUT terraform:  + availability_zone = "nova"
2025-02-04 08:42:11.364545 | orchestrator | 08:42:11.364 STDOUT terraform:  + id = (known after apply)
2025-02-04 08:42:11.364567 | orchestrator | 08:42:11.364 STDOUT terraform:  + metadata = (known after apply)
2025-02-04 08:42:11.364620 | orchestrator | 08:42:11.364 STDOUT terraform:  + name = "testbed-volume-10-node-4"
2025-02-04 08:42:11.364647 | orchestrator | 08:42:11.364 STDOUT terraform:  + region = (known after apply)
2025-02-04 08:42:11.364670 | orchestrator | 08:42:11.364 STDOUT terraform:  + size = 20
2025-02-04 08:42:11.364690 | orchestrator | 08:42:11.364 STDOUT terraform:  + volume_type = "ssd"
2025-02-04 08:42:11.364697 | orchestrator | 08:42:11.364 STDOUT terraform:  }
2025-02-04 08:42:11.364757 | orchestrator | 08:42:11.364 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[11] will be created
2025-02-04 08:42:11.364802 | orchestrator | 08:42:11.364 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-02-04 08:42:11.364863 | orchestrator | 08:42:11.364 STDOUT terraform:  + attachment = (known after apply)
2025-02-04 08:42:11.364882 | orchestrator | 08:42:11.364 STDOUT terraform:  + availability_zone = "nova"
2025-02-04 08:42:11.364919 | orchestrator | 08:42:11.364 STDOUT terraform:  + id = (known after apply)
2025-02-04 08:42:11.364954 | orchestrator | 08:42:11.364 STDOUT terraform:  + metadata = (known after apply)
2025-02-04 08:42:11.364998 | orchestrator | 08:42:11.364 STDOUT terraform:  + name = "testbed-volume-11-node-5"
2025-02-04 08:42:11.365033 | orchestrator | 08:42:11.364 STDOUT terraform:  + region = (known after apply)
2025-02-04 08:42:11.365058 | orchestrator | 08:42:11.365 STDOUT terraform:  + size = 20
2025-02-04 08:42:11.365076 | orchestrator | 08:42:11.365 STDOUT terraform:  + volume_type = "ssd"
2025-02-04 08:42:11.365083 | orchestrator | 08:42:11.365 STDOUT terraform:  }
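All 18 node_volume entries in this plan share the same shape and differ only in the name, which cycles across the six nodes (testbed-volume-0-node-0 through testbed-volume-17-node-5). A count-based resource produces exactly this pattern; a minimal sketch, where the count value and the modulo naming are assumptions inferred from the planned names:

    resource "openstack_blockstorage_volume_v3" "node_volume" {
      count             = 18  # indices 0..17 appear in the plan
      # testbed-volume-6-node-0, testbed-volume-7-node-1, ... suggest index % 6 selects the node.
      name              = "testbed-volume-${count.index}-node-${count.index % 6}"
      size              = 20
      volume_type       = "ssd"
      availability_zone = "nova"
    }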
2025-02-04 08:42:11.365981 | orchestrator | 08:42:11.365 STDOUT terraform:  + size = 20 2025-02-04 08:42:11.366006 | orchestrator | 08:42:11.365 STDOUT terraform:  + volume_type = "ssd" 2025-02-04 08:42:11.366028 | orchestrator | 08:42:11.366 STDOUT terraform:  } 2025-02-04 08:42:11.366113 | orchestrator | 08:42:11.366 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[13] will be created 2025-02-04 08:42:11.366165 | orchestrator | 08:42:11.366 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-04 08:42:11.366193 | orchestrator | 08:42:11.366 STDOUT terraform:  + attachment = (known after apply) 2025-02-04 08:42:11.366221 | orchestrator | 08:42:11.366 STDOUT terraform:  + availability_zone = "nova" 2025-02-04 08:42:11.366258 | orchestrator | 08:42:11.366 STDOUT terraform:  + id = (known after apply) 2025-02-04 08:42:11.366299 | orchestrator | 08:42:11.366 STDOUT terraform:  + metadata = (known after apply) 2025-02-04 08:42:11.366345 | orchestrator | 08:42:11.366 STDOUT terraform:  + name = "testbed-volume-13-node-1" 2025-02-04 08:42:11.366383 | orchestrator | 08:42:11.366 STDOUT terraform:  + region = (known after apply) 2025-02-04 08:42:11.366411 | orchestrator | 08:42:11.366 STDOUT terraform:  + size = 20 2025-02-04 08:42:11.366443 | orchestrator | 08:42:11.366 STDOUT terraform:  + volume_type = "ssd" 2025-02-04 08:42:11.366450 | orchestrator | 08:42:11.366 STDOUT terraform:  } 2025-02-04 08:42:11.366511 | orchestrator | 08:42:11.366 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[14] will be created 2025-02-04 08:42:11.366563 | orchestrator | 08:42:11.366 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-04 08:42:11.366605 | orchestrator | 08:42:11.366 STDOUT terraform:  + attachment = (known after apply) 2025-02-04 08:42:11.366623 | orchestrator | 08:42:11.366 STDOUT terraform:  + availability_zone = "nova" 2025-02-04 08:42:11.366664 | orchestrator | 08:42:11.366 STDOUT terraform:  + id = (known after apply) 2025-02-04 08:42:11.366700 | orchestrator | 08:42:11.366 STDOUT terraform:  + metadata = (known after apply) 2025-02-04 08:42:11.366749 | orchestrator | 08:42:11.366 STDOUT terraform:  + name = "testbed-volume-14-node-2" 2025-02-04 08:42:11.374118 | orchestrator | 08:42:11.366 STDOUT terraform:  + region = (known after apply) 2025-02-04 08:42:11.374166 | orchestrator | 08:42:11.371 STDOUT terraform:  + size = 20 2025-02-04 08:42:11.374176 | orchestrator | 08:42:11.371 STDOUT terraform:  + volume_type = "ssd" 2025-02-04 08:42:11.374184 | orchestrator | 08:42:11.371 STDOUT terraform:  } 2025-02-04 08:42:11.374192 | orchestrator | 08:42:11.371 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[15] will be created 2025-02-04 08:42:11.374215 | orchestrator | 08:42:11.371 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-04 08:42:11.374227 | orchestrator | 08:42:11.371 STDOUT terraform:  + attachment = (known after apply) 2025-02-04 08:42:11.374238 | orchestrator | 08:42:11.372 STDOUT terraform:  + availability_zone = "nova" 2025-02-04 08:42:11.374250 | orchestrator | 08:42:11.372 STDOUT terraform:  + id = (known after apply) 2025-02-04 08:42:11.374267 | orchestrator | 08:42:11.372 STDOUT terraform:  + metadata = (known after apply) 2025-02-04 08:42:11.374281 | orchestrator | 08:42:11.372 STDOUT terraform:  + name = "testbed-volume-15-node-3" 2025-02-04 08:42:11.374293 | orchestrator | 08:42:11.372 STDOUT terraform:  + region 
= (known after apply) 2025-02-04 08:42:11.374305 | orchestrator | 08:42:11.372 STDOUT terraform:  + size = 20 2025-02-04 08:42:11.374319 | orchestrator | 08:42:11.372 STDOUT terraform:  + volume_type = "ssd" 2025-02-04 08:42:11.374331 | orchestrator | 08:42:11.372 STDOUT terraform:  } 2025-02-04 08:42:11.374344 | orchestrator | 08:42:11.372 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[16] will be created 2025-02-04 08:42:11.374357 | orchestrator | 08:42:11.372 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-04 08:42:11.374370 | orchestrator | 08:42:11.372 STDOUT terraform:  + attachment = (known after apply) 2025-02-04 08:42:11.374384 | orchestrator | 08:42:11.372 STDOUT terraform:  + availability_zone = "nova" 2025-02-04 08:42:11.374397 | orchestrator | 08:42:11.372 STDOUT terraform:  + id = (known after apply) 2025-02-04 08:42:11.374408 | orchestrator | 08:42:11.372 STDOUT terraform:  + metadata = (known after apply) 2025-02-04 08:42:11.374420 | orchestrator | 08:42:11.372 STDOUT terraform:  + name = "testbed-volume-16-node-4" 2025-02-04 08:42:11.374431 | orchestrator | 08:42:11.372 STDOUT terraform:  + region = (known after apply) 2025-02-04 08:42:11.374443 | orchestrator | 08:42:11.372 STDOUT terraform:  + size = 20 2025-02-04 08:42:11.374454 | orchestrator | 08:42:11.372 STDOUT terraform:  + volume_type = "ssd" 2025-02-04 08:42:11.374465 | orchestrator | 08:42:11.372 STDOUT terraform:  } 2025-02-04 08:42:11.374476 | orchestrator | 08:42:11.372 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[17] will be created 2025-02-04 08:42:11.374487 | orchestrator | 08:42:11.372 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-02-04 08:42:11.374498 | orchestrator | 08:42:11.372 STDOUT terraform:  + attachment = (known after apply) 2025-02-04 08:42:11.374508 | orchestrator | 08:42:11.372 STDOUT terraform:  + availability_zone = "nova" 2025-02-04 08:42:11.374519 | orchestrator | 08:42:11.372 STDOUT terraform:  + id = (known after apply) 2025-02-04 08:42:11.374529 | orchestrator | 08:42:11.372 STDOUT terraform:  + metadata = (known after apply) 2025-02-04 08:42:11.374539 | orchestrator | 08:42:11.372 STDOUT terraform:  + name = "testbed-volume-17-node-5" 2025-02-04 08:42:11.374546 | orchestrator | 08:42:11.372 STDOUT terraform:  + region = (known after apply) 2025-02-04 08:42:11.374562 | orchestrator | 08:42:11.372 STDOUT terraform:  + size = 20 2025-02-04 08:42:11.374568 | orchestrator | 08:42:11.372 STDOUT terraform:  + volume_type = "ssd" 2025-02-04 08:42:11.374575 | orchestrator | 08:42:11.372 STDOUT terraform:  } 2025-02-04 08:42:11.374585 | orchestrator | 08:42:11.372 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-02-04 08:42:11.374603 | orchestrator | 08:42:11.372 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-02-04 08:42:11.374975 | orchestrator | 08:42:11.373 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-02-04 08:42:11.374992 | orchestrator | 08:42:11.373 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-02-04 08:42:11.374998 | orchestrator | 08:42:11.373 STDOUT terraform:  + all_metadata = (known after apply) 2025-02-04 08:42:11.375005 | orchestrator | 08:42:11.373 STDOUT terraform:  + all_tags = (known after apply) 2025-02-04 08:42:11.375011 | orchestrator | 08:42:11.373 STDOUT terraform:  + availability_zone = "nova" 2025-02-04 08:42:11.375018 | orchestrator | 
  # openstack_compute_instance_v2.manager_server will be created
  + resource "openstack_compute_instance_v2" "manager_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-4V-16"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-manager"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = (known after apply)

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }
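A hedged sketch of a configuration that matches this manager plan. The boot-volume and port references are placeholders, since the plan shows both only as (known after apply):

# Hypothetical sketch of the manager instance.
resource "openstack_compute_instance_v2" "manager_server" {
  name              = "testbed-manager"
  flavor_name       = "OSISM-4V-16"
  key_pair          = "testbed"
  availability_zone = "nova"
  config_drive      = true
  power_state       = "active"

  block_device {
    # Placeholder: the actual boot volume is not named in this excerpt.
    uuid                  = openstack_blockstorage_volume_v3.manager_volume.id
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = false
  }

  network {
    # Wiring to the management port is an inference from the resource names.
    port = openstack_networking_port_v2.manager_port_management.id
  }
}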
  # openstack_compute_instance_v2.node_server[0] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-0"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[1] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-1"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[2] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }
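The six node entries differ only in name. A count-based sketch under the same assumptions as the manager sketch above; user_data appears in the plan only as a content hash, so the file name here is a placeholder:

# Hypothetical reconstruction of the node definition; count, the
# user_data file, and the referenced resources are assumptions.
resource "openstack_compute_instance_v2" "node_server" {
  count             = 6
  name              = "testbed-node-${count.index}"
  flavor_name       = "OSISM-8V-32"
  key_pair          = "testbed"
  availability_zone = "nova"
  config_drive      = true
  power_state       = "active"
  user_data         = file("user_data.yml")   # placeholder file name

  block_device {
    # Placeholder resource name: the boot volumes are not named in this excerpt.
    uuid                  = openstack_blockstorage_volume_v3.node_base_volume[count.index].id
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = false
  }

  network {
    port = openstack_networking_port_v2.node_port_management[count.index].id
  }
}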
  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }
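The keypair plan shows a sensitive private_key and a computed public_key, which is what the provider produces when no public_key argument is given; a minimal sketch:

# With no public_key argument, the provider generates the keypair and
# exposes the private key as the sensitive attribute seen in the plan.
resource "openstack_compute_keypair_v2" "key" {
  name = "testbed"
}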
  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[9] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[10] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[11] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[12] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[13] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[14] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[15] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[16] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[17] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }
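All eighteen attachment stubs are identical, with instance and volume IDs resolved at apply time. Given the volume names seen earlier (volume N is labelled node N % 6), a plausible count-based sketch is:

# Hypothetical reconstruction; the volume-to-node mapping is inferred
# from the "testbed-volume-N-node-M" naming, not shown in the plan.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 18
  instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}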
  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }
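A sketch of the floating-IP pair; attaching the association to the manager's management port is an inference from the resource names:

# Hypothetical reconstruction of the manager's public address.
resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "public"
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id  # inferred target
}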
  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }
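Only two arguments in the management network are set in configuration; everything else is computed. A minimal sketch:

resource "openstack_networking_network_v2" "net_management" {
  name                    = "net-testbed-management"
  availability_zone_hints = ["nova"]
}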
  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }
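A sketch matching the manager port; the subnet reference is a placeholder, since no subnet resource appears in this excerpt:

# Hypothetical reconstruction of the manager's management port.
resource "openstack_networking_port_v2" "manager_port_management" {
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id  # placeholder
    ip_address = "192.168.16.5"
  }

  allowed_address_pairs {
    ip_address = "192.168.112.0/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.8/20"
  }
}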
  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }
apply) 2025-02-04 08:42:11.396344 | orchestrator | 08:42:11.396 STDOUT terraform:  + mac_address = (known after apply) 2025-02-04 08:42:11.396386 | orchestrator | 08:42:11.396 STDOUT terraform:  + network_id = (known after apply) 2025-02-04 08:42:11.396428 | orchestrator | 08:42:11.396 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-02-04 08:42:11.396474 | orchestrator | 08:42:11.396 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-02-04 08:42:11.396520 | orchestrator | 08:42:11.396 STDOUT terraform:  + region = (known after apply) 2025-02-04 08:42:11.396555 | orchestrator | 08:42:11.396 STDOUT terraform:  + security_group_ids = (known after apply) 2025-02-04 08:42:11.396598 | orchestrator | 08:42:11.396 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-04 08:42:11.396620 | orchestrator | 08:42:11.396 STDOUT terraform:  + allowed_address_pairs { 2025-02-04 08:42:11.396657 | orchestrator | 08:42:11.396 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-02-04 08:42:11.396664 | orchestrator | 08:42:11.396 STDOUT terraform:  } 2025-02-04 08:42:11.396689 | orchestrator | 08:42:11.396 STDOUT terraform:  + allowed_address_pairs { 2025-02-04 08:42:11.396728 | orchestrator | 08:42:11.396 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-02-04 08:42:11.396735 | orchestrator | 08:42:11.396 STDOUT terraform:  } 2025-02-04 08:42:11.396759 | orchestrator | 08:42:11.396 STDOUT terraform:  + allowed_address_pairs { 2025-02-04 08:42:11.396792 | orchestrator | 08:42:11.396 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-02-04 08:42:11.396800 | orchestrator | 08:42:11.396 STDOUT terraform:  } 2025-02-04 08:42:11.396826 | orchestrator | 08:42:11.396 STDOUT terraform:  + allowed_address_pairs { 2025-02-04 08:42:11.396876 | orchestrator | 08:42:11.396 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-02-04 08:42:11.396884 | orchestrator | 08:42:11.396 STDOUT terraform:  } 2025-02-04 08:42:11.396909 | orchestrator | 08:42:11.396 STDOUT terraform:  + binding (known after apply) 2025-02-04 08:42:11.396916 | orchestrator | 08:42:11.396 STDOUT terraform:  + fixed_ip { 2025-02-04 08:42:11.396950 | orchestrator | 08:42:11.396 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-02-04 08:42:11.396980 | orchestrator | 08:42:11.396 STDOUT terraform:  + subnet_id = (known after apply) 2025-02-04 08:42:11.396987 | orchestrator | 08:42:11.396 STDOUT terraform:  } 2025-02-04 08:42:11.397006 | orchestrator | 08:42:11.396 STDOUT terraform:  } 2025-02-04 08:42:11.397060 | orchestrator | 08:42:11.397 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-02-04 08:42:11.397114 | orchestrator | 08:42:11.397 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-02-04 08:42:11.397157 | orchestrator | 08:42:11.397 STDOUT terraform:  + admin_state_up = (known after apply) 2025-02-04 08:42:11.397199 | orchestrator | 08:42:11.397 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-02-04 08:42:11.397242 | orchestrator | 08:42:11.397 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-02-04 08:42:11.397286 | orchestrator | 08:42:11.397 STDOUT terraform:  + all_tags = (known after apply) 2025-02-04 08:42:11.397330 | orchestrator | 08:42:11.397 STDOUT terraform:  + device_id = (known after apply) 2025-02-04 08:42:11.397370 | orchestrator | 08:42:11.397 STDOUT terraform:  + device_owner = (known after apply) 2025-02-04 08:42:11.397412 | orchestrator | 
08:42:11.397 STDOUT terraform:  + dns_assignment = (known after apply) 2025-02-04 08:42:11.397455 | orchestrator | 08:42:11.397 STDOUT terraform:  + dns_name = (known after apply) 2025-02-04 08:42:11.397499 | orchestrator | 08:42:11.397 STDOUT terraform:  + id = (known after apply) 2025-02-04 08:42:11.397540 | orchestrator | 08:42:11.397 STDOUT terraform:  + mac_address = (known after apply) 2025-02-04 08:42:11.397583 | orchestrator | 08:42:11.397 STDOUT terraform:  + network_id = (known after apply) 2025-02-04 08:42:11.397625 | orchestrator | 08:42:11.397 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-02-04 08:42:11.397668 | orchestrator | 08:42:11.397 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-02-04 08:42:11.397710 | orchestrator | 08:42:11.397 STDOUT terraform:  + region = (known after apply) 2025-02-04 08:42:11.397751 | orchestrator | 08:42:11.397 STDOUT terraform:  + security_group_ids = (known after apply) 2025-02-04 08:42:11.397796 | orchestrator | 08:42:11.397 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-04 08:42:11.397835 | orchestrator | 08:42:11.397 STDOUT terraform:  + allowed_address_pairs { 2025-02-04 08:42:11.397893 | orchestrator | 08:42:11.397 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-02-04 08:42:11.397908 | orchestrator | 08:42:11.397 STDOUT terraform:  } 2025-02-04 08:42:11.397932 | orchestrator | 08:42:11.397 STDOUT terraform:  + allowed_address_pairs { 2025-02-04 08:42:11.397966 | orchestrator | 08:42:11.397 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-02-04 08:42:11.397974 | orchestrator | 08:42:11.397 STDOUT terraform:  } 2025-02-04 08:42:11.397999 | orchestrator | 08:42:11.397 STDOUT terraform:  + allowed_address_pairs { 2025-02-04 08:42:11.398056 | orchestrator | 08:42:11.397 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-02-04 08:42:11.398064 | orchestrator | 08:42:11.398 STDOUT terraform:  } 2025-02-04 08:42:11.398087 | orchestrator | 08:42:11.398 STDOUT terraform:  + allowed_address_pairs { 2025-02-04 08:42:11.398122 | orchestrator | 08:42:11.398 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-02-04 08:42:11.398138 | orchestrator | 08:42:11.398 STDOUT terraform:  } 2025-02-04 08:42:11.398165 | orchestrator | 08:42:11.398 STDOUT terraform:  + binding (known after apply) 2025-02-04 08:42:11.398181 | orchestrator | 08:42:11.398 STDOUT terraform:  + fixed_ip { 2025-02-04 08:42:11.398209 | orchestrator | 08:42:11.398 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-02-04 08:42:11.398244 | orchestrator | 08:42:11.398 STDOUT terraform:  + subnet_id = (known after apply) 2025-02-04 08:42:11.398251 | orchestrator | 08:42:11.398 STDOUT terraform:  } 2025-02-04 08:42:11.398263 | orchestrator | 08:42:11.398 STDOUT terraform:  } 2025-02-04 08:42:11.398321 | orchestrator | 08:42:11.398 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-02-04 08:42:11.398375 | orchestrator | 08:42:11.398 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-02-04 08:42:11.398418 | orchestrator | 08:42:11.398 STDOUT terraform:  + admin_state_up = (known after apply) 2025-02-04 08:42:11.398459 | orchestrator | 08:42:11.398 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-02-04 08:42:11.398501 | orchestrator | 08:42:11.398 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-02-04 08:42:11.398544 | orchestrator | 08:42:11.398 STDOUT terraform:  + all_tags = (known after 
apply) 2025-02-04 08:42:11.398586 | orchestrator | 08:42:11.398 STDOUT terraform:  + device_id = (known after apply) 2025-02-04 08:42:11.398629 | orchestrator | 08:42:11.398 STDOUT terraform:  + device_owner = (known after apply) 2025-02-04 08:42:11.398671 | orchestrator | 08:42:11.398 STDOUT terraform:  + dns_assignment = (known after apply) 2025-02-04 08:42:11.398717 | orchestrator | 08:42:11.398 STDOUT terraform:  + dns_name = (known after apply) 2025-02-04 08:42:11.398757 | orchestrator | 08:42:11.398 STDOUT terraform:  + id = (known after apply) 2025-02-04 08:42:11.398798 | orchestrator | 08:42:11.398 STDOUT terraform:  + mac_address = (known after apply) 2025-02-04 08:42:11.398852 | orchestrator | 08:42:11.398 STDOUT terraform:  + network_id = (known after apply) 2025-02-04 08:42:11.398891 | orchestrator | 08:42:11.398 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-02-04 08:42:11.398933 | orchestrator | 08:42:11.398 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-02-04 08:42:11.398976 | orchestrator | 08:42:11.398 STDOUT terraform:  + region = (known after apply) 2025-02-04 08:42:11.399018 | orchestrator | 08:42:11.398 STDOUT terraform:  + security_group_ids = (known after apply) 2025-02-04 08:42:11.399060 | orchestrator | 08:42:11.399 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-04 08:42:11.399082 | orchestrator | 08:42:11.399 STDOUT terraform:  + allowed_address_pairs { 2025-02-04 08:42:11.399114 | orchestrator | 08:42:11.399 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-02-04 08:42:11.399125 | orchestrator | 08:42:11.399 STDOUT terraform:  } 2025-02-04 08:42:11.399146 | orchestrator | 08:42:11.399 STDOUT terraform:  + allowed_address_pairs { 2025-02-04 08:42:11.399179 | orchestrator | 08:42:11.399 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-02-04 08:42:11.399187 | orchestrator | 08:42:11.399 STDOUT terraform:  } 2025-02-04 08:42:11.399211 | orchestrator | 08:42:11.399 STDOUT terraform:  + allowed_address_pairs { 2025-02-04 08:42:11.399248 | orchestrator | 08:42:11.399 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-02-04 08:42:11.399254 | orchestrator | 08:42:11.399 STDOUT terraform:  } 2025-02-04 08:42:11.399277 | orchestrator | 08:42:11.399 STDOUT terraform:  + allowed_address_pairs { 2025-02-04 08:42:11.399312 | orchestrator | 08:42:11.399 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-02-04 08:42:11.399323 | orchestrator | 08:42:11.399 STDOUT terraform:  } 2025-02-04 08:42:11.399345 | orchestrator | 08:42:11.399 STDOUT terraform:  + binding (known after apply) 2025-02-04 08:42:11.399360 | orchestrator | 08:42:11.399 STDOUT terraform:  + fixed_ip { 2025-02-04 08:42:11.399389 | orchestrator | 08:42:11.399 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-02-04 08:42:11.399423 | orchestrator | 08:42:11.399 STDOUT terraform:  + subnet_id = (known after apply) 2025-02-04 08:42:11.399430 | orchestrator | 08:42:11.399 STDOUT terraform:  } 2025-02-04 08:42:11.399446 | orchestrator | 08:42:11.399 STDOUT terraform:  } 2025-02-04 08:42:11.399501 | orchestrator | 08:42:11.399 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-02-04 08:42:11.399562 | orchestrator | 08:42:11.399 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-02-04 08:42:11.399605 | orchestrator | 08:42:11.399 STDOUT terraform:  + admin_state_up = (known after apply) 2025-02-04 08:42:11.399649 | orchestrator | 08:42:11.399 STDOUT 
terraform:  + all_fixed_ips = (known after apply) 2025-02-04 08:42:11.399690 | orchestrator | 08:42:11.399 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-02-04 08:42:11.399732 | orchestrator | 08:42:11.399 STDOUT terraform:  + all_tags = (known after apply) 2025-02-04 08:42:11.399774 | orchestrator | 08:42:11.399 STDOUT terraform:  + device_id = (known after apply) 2025-02-04 08:42:11.399816 | orchestrator | 08:42:11.399 STDOUT terraform:  + device_owner = (known after apply) 2025-02-04 08:42:11.399870 | orchestrator | 08:42:11.399 STDOUT terraform:  + dns_assignment = (known after apply) 2025-02-04 08:42:11.399913 | orchestrator | 08:42:11.399 STDOUT terraform:  + dns_name = (known after apply) 2025-02-04 08:42:11.399956 | orchestrator | 08:42:11.399 STDOUT terraform:  + id = (known after apply) 2025-02-04 08:42:11.400002 | orchestrator | 08:42:11.399 STDOUT terraform:  + mac_address = (known after apply) 2025-02-04 08:42:11.400044 | orchestrator | 08:42:11.399 STDOUT terraform:  + network_id = (known after apply) 2025-02-04 08:42:11.400085 | orchestrator | 08:42:11.400 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-02-04 08:42:11.400138 | orchestrator | 08:42:11.400 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-02-04 08:42:11.400170 | orchestrator | 08:42:11.400 STDOUT terraform:  + region = (known after apply) 2025-02-04 08:42:11.400211 | orchestrator | 08:42:11.400 STDOUT terraform:  + security_group_ids = (known after apply) 2025-02-04 08:42:11.400253 | orchestrator | 08:42:11.400 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-04 08:42:11.400276 | orchestrator | 08:42:11.400 STDOUT terraform:  + allowed_address_pairs { 2025-02-04 08:42:11.400309 | orchestrator | 08:42:11.400 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-02-04 08:42:11.400318 | orchestrator | 08:42:11.400 STDOUT terraform:  } 2025-02-04 08:42:11.400343 | orchestrator | 08:42:11.400 STDOUT terraform:  + allowed_address_pairs { 2025-02-04 08:42:11.400377 | orchestrator | 08:42:11.400 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-02-04 08:42:11.400390 | orchestrator | 08:42:11.400 STDOUT terraform:  } 2025-02-04 08:42:11.400409 | orchestrator | 08:42:11.400 STDOUT terraform:  + allowed_address_pairs { 2025-02-04 08:42:11.400443 | orchestrator | 08:42:11.400 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-02-04 08:42:11.400458 | orchestrator | 08:42:11.400 STDOUT terraform:  } 2025-02-04 08:42:11.400482 | orchestrator | 08:42:11.400 STDOUT terraform:  + allowed_address_pairs { 2025-02-04 08:42:11.400515 | orchestrator | 08:42:11.400 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-02-04 08:42:11.400522 | orchestrator | 08:42:11.400 STDOUT terraform:  } 2025-02-04 08:42:11.400551 | orchestrator | 08:42:11.400 STDOUT terraform:  + binding (known after apply) 2025-02-04 08:42:11.400566 | orchestrator | 08:42:11.400 STDOUT terraform:  + fixed_ip { 2025-02-04 08:42:11.400596 | orchestrator | 08:42:11.400 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-02-04 08:42:11.400632 | orchestrator | 08:42:11.400 STDOUT terraform:  + subnet_id = (known after apply) 2025-02-04 08:42:11.400639 | orchestrator | 08:42:11.400 STDOUT terraform:  } 2025-02-04 08:42:11.400654 | orchestrator | 08:42:11.400 STDOUT terraform:  } 2025-02-04 08:42:11.400710 | orchestrator | 08:42:11.400 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-02-04 08:42:11.400781 | orchestrator | 
08:42:11.400 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-02-04 08:42:11.400802 | orchestrator | 08:42:11.400 STDOUT terraform:  + force_destroy = false 2025-02-04 08:42:11.400867 | orchestrator | 08:42:11.400 STDOUT terraform:  + id = (known after apply) 2025-02-04 08:42:11.400875 | orchestrator | 08:42:11.400 STDOUT terraform:  + port_id = (known after apply) 2025-02-04 08:42:11.400909 | orchestrator | 08:42:11.400 STDOUT terraform:  + region = (known after apply) 2025-02-04 08:42:11.400942 | orchestrator | 08:42:11.400 STDOUT terraform:  + router_id = (known after apply) 2025-02-04 08:42:11.400977 | orchestrator | 08:42:11.400 STDOUT terraform:  + subnet_id = (known after apply) 2025-02-04 08:42:11.400984 | orchestrator | 08:42:11.400 STDOUT terraform:  } 2025-02-04 08:42:11.401028 | orchestrator | 08:42:11.400 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-02-04 08:42:11.401072 | orchestrator | 08:42:11.401 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-02-04 08:42:11.401114 | orchestrator | 08:42:11.401 STDOUT terraform:  + admin_state_up = (known after apply) 2025-02-04 08:42:11.401158 | orchestrator | 08:42:11.401 STDOUT terraform:  + all_tags = (known after apply) 2025-02-04 08:42:11.401183 | orchestrator | 08:42:11.401 STDOUT terraform:  + availability_zone_hints = [ 2025-02-04 08:42:11.401199 | orchestrator | 08:42:11.401 STDOUT terraform:  + "nova", 2025-02-04 08:42:11.401213 | orchestrator | 08:42:11.401 STDOUT terraform:  ] 2025-02-04 08:42:11.401258 | orchestrator | 08:42:11.401 STDOUT terraform:  + distributed = (known after apply) 2025-02-04 08:42:11.401303 | orchestrator | 08:42:11.401 STDOUT terraform:  + enable_snat = (known after apply) 2025-02-04 08:42:11.401362 | orchestrator | 08:42:11.401 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-02-04 08:42:11.401406 | orchestrator | 08:42:11.401 STDOUT terraform:  + id = (known after apply) 2025-02-04 08:42:11.401440 | orchestrator | 08:42:11.401 STDOUT terraform:  + name = "testbed" 2025-02-04 08:42:11.401483 | orchestrator | 08:42:11.401 STDOUT terraform:  + region = (known after apply) 2025-02-04 08:42:11.401526 | orchestrator | 08:42:11.401 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-04 08:42:11.401561 | orchestrator | 08:42:11.401 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-02-04 08:42:11.401568 | orchestrator | 08:42:11.401 STDOUT terraform:  } 2025-02-04 08:42:11.401635 | orchestrator | 08:42:11.401 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-02-04 08:42:11.401697 | orchestrator | 08:42:11.401 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-02-04 08:42:11.401720 | orchestrator | 08:42:11.401 STDOUT terraform:  + description = "ssh" 2025-02-04 08:42:11.401748 | orchestrator | 08:42:11.401 STDOUT terraform:  + direction = "ingress" 2025-02-04 08:42:11.401771 | orchestrator | 08:42:11.401 STDOUT terraform:  + ethertype = "IPv4" 2025-02-04 08:42:11.401808 | orchestrator | 08:42:11.401 STDOUT terraform:  + id = (known after apply) 2025-02-04 08:42:11.401830 | orchestrator | 08:42:11.401 STDOUT terraform:  + port_range_max = 22 2025-02-04 08:42:11.401874 | orchestrator | 08:42:11.401 STDOUT terraform:  + port_range_min = 22 2025-02-04 08:42:11.401897 | orchestrator | 08:42:11.401 STDOUT terraform:  + 
protocol = "tcp" 2025-02-04 08:42:11.401931 | orchestrator | 08:42:11.401 STDOUT terraform:  + region = (known after apply) 2025-02-04 08:42:11.401966 | orchestrator | 08:42:11.401 STDOUT terraform:  + remote_group_id = (known after apply) 2025-02-04 08:42:11.401994 | orchestrator | 08:42:11.401 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-02-04 08:42:11.402061 | orchestrator | 08:42:11.401 STDOUT terraform:  + security_group_id = (known after apply) 2025-02-04 08:42:11.402098 | orchestrator | 08:42:11.402 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-04 08:42:11.402106 | orchestrator | 08:42:11.402 STDOUT terraform:  } 2025-02-04 08:42:11.402171 | orchestrator | 08:42:11.402 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-02-04 08:42:11.402235 | orchestrator | 08:42:11.402 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-02-04 08:42:11.402263 | orchestrator | 08:42:11.402 STDOUT terraform:  + description = "wireguard" 2025-02-04 08:42:11.402291 | orchestrator | 08:42:11.402 STDOUT terraform:  + direction = "ingress" 2025-02-04 08:42:11.402315 | orchestrator | 08:42:11.402 STDOUT terraform:  + ethertype = "IPv4" 2025-02-04 08:42:11.402350 | orchestrator | 08:42:11.402 STDOUT terraform:  + id = (known after apply) 2025-02-04 08:42:11.402373 | orchestrator | 08:42:11.402 STDOUT terraform:  + port_range_max = 51820 2025-02-04 08:42:11.402396 | orchestrator | 08:42:11.402 STDOUT terraform:  + port_range_min = 51820 2025-02-04 08:42:11.402420 | orchestrator | 08:42:11.402 STDOUT terraform:  + protocol = "udp" 2025-02-04 08:42:11.402457 | orchestrator | 08:42:11.402 STDOUT terraform:  + region = (known after apply) 2025-02-04 08:42:11.402493 | orchestrator | 08:42:11.402 STDOUT terraform:  + remote_group_id = (known after apply) 2025-02-04 08:42:11.402522 | orchestrator | 08:42:11.402 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-02-04 08:42:11.402556 | orchestrator | 08:42:11.402 STDOUT terraform:  + security_group_id = (known after apply) 2025-02-04 08:42:11.402592 | orchestrator | 08:42:11.402 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-04 08:42:11.402599 | orchestrator | 08:42:11.402 STDOUT terraform:  } 2025-02-04 08:42:11.402666 | orchestrator | 08:42:11.402 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-02-04 08:42:11.402730 | orchestrator | 08:42:11.402 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-02-04 08:42:11.402758 | orchestrator | 08:42:11.402 STDOUT terraform:  + direction = "ingress" 2025-02-04 08:42:11.402781 | orchestrator | 08:42:11.402 STDOUT terraform:  + ethertype = "IPv4" 2025-02-04 08:42:11.402817 | orchestrator | 08:42:11.402 STDOUT terraform:  + id = (known after apply) 2025-02-04 08:42:11.402853 | orchestrator | 08:42:11.402 STDOUT terraform:  + protocol = "tcp" 2025-02-04 08:42:11.402885 | orchestrator | 08:42:11.402 STDOUT terraform:  + region = (known after apply) 2025-02-04 08:42:11.402919 | orchestrator | 08:42:11.402 STDOUT terraform:  + remote_group_id = (known after apply) 2025-02-04 08:42:11.402953 | orchestrator | 08:42:11.402 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-02-04 08:42:11.402989 | orchestrator | 08:42:11.402 STDOUT terraform:  + security_group_id = (known after apply) 2025-02-04 08:42:11.403026 | orchestrator | 
08:42:11.402 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-04 08:42:11.403034 | orchestrator | 08:42:11.403 STDOUT terraform:  } 2025-02-04 08:42:11.403101 | orchestrator | 08:42:11.403 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-02-04 08:42:11.403165 | orchestrator | 08:42:11.403 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-02-04 08:42:11.403192 | orchestrator | 08:42:11.403 STDOUT terraform:  + direction = "ingress" 2025-02-04 08:42:11.403215 | orchestrator | 08:42:11.403 STDOUT terraform:  + ethertype = "IPv4" 2025-02-04 08:42:11.403251 | orchestrator | 08:42:11.403 STDOUT terraform:  + id = (known after apply) 2025-02-04 08:42:11.403274 | orchestrator | 08:42:11.403 STDOUT terraform:  + protocol = "udp" 2025-02-04 08:42:11.403309 | orchestrator | 08:42:11.403 STDOUT terraform:  + region = (known after apply) 2025-02-04 08:42:11.403344 | orchestrator | 08:42:11.403 STDOUT terraform:  + remote_group_id = (known after apply) 2025-02-04 08:42:11.403413 | orchestrator | 08:42:11.403 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-02-04 08:42:11.403443 | orchestrator | 08:42:11.403 STDOUT terraform:  + security_group_id = (known after apply) 2025-02-04 08:42:11.403457 | orchestrator | 08:42:11.403 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-04 08:42:11.403515 | orchestrator | 08:42:11.403 STDOUT terraform:  } 2025-02-04 08:42:11.403523 | orchestrator | 08:42:11.403 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-02-04 08:42:11.403580 | orchestrator | 08:42:11.403 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-02-04 08:42:11.403607 | orchestrator | 08:42:11.403 STDOUT terraform:  + direction = "ingress" 2025-02-04 08:42:11.403629 | orchestrator | 08:42:11.403 STDOUT terraform:  + ethertype = "IPv4" 2025-02-04 08:42:11.403666 | orchestrator | 08:42:11.403 STDOUT terraform:  + id = (known after apply) 2025-02-04 08:42:11.403688 | orchestrator | 08:42:11.403 STDOUT terraform:  + protocol = "icmp" 2025-02-04 08:42:11.403723 | orchestrator | 08:42:11.403 STDOUT terraform:  + region = (known after apply) 2025-02-04 08:42:11.403776 | orchestrator | 08:42:11.403 STDOUT terraform:  + remote_group_id = (known after apply) 2025-02-04 08:42:11.403814 | orchestrator | 08:42:11.403 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-02-04 08:42:11.403822 | orchestrator | 08:42:11.403 STDOUT terraform:  + security_group_id = (known after apply) 2025-02-04 08:42:11.403873 | orchestrator | 08:42:11.403 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-04 08:42:11.403933 | orchestrator | 08:42:11.403 STDOUT terraform:  } 2025-02-04 08:42:11.403941 | orchestrator | 08:42:11.403 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-02-04 08:42:11.403996 | orchestrator | 08:42:11.403 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-02-04 08:42:11.404023 | orchestrator | 08:42:11.403 STDOUT terraform:  + direction = "ingress" 2025-02-04 08:42:11.404045 | orchestrator | 08:42:11.404 STDOUT terraform:  + ethertype = "IPv4" 2025-02-04 08:42:11.404082 | orchestrator | 08:42:11.404 STDOUT terraform:  + id = (known after apply) 2025-02-04 08:42:11.404104 | orchestrator | 08:42:11.404 STDOUT 
terraform:  + protocol = "tcp" 2025-02-04 08:42:11.404140 | orchestrator | 08:42:11.404 STDOUT terraform:  + region = (known after apply) 2025-02-04 08:42:11.404174 | orchestrator | 08:42:11.404 STDOUT terraform:  + remote_group_id = (known after apply) 2025-02-04 08:42:11.404201 | orchestrator | 08:42:11.404 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-02-04 08:42:11.404237 | orchestrator | 08:42:11.404 STDOUT terraform:  + security_group_id = (known after apply) 2025-02-04 08:42:11.404278 | orchestrator | 08:42:11.404 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-04 08:42:11.404285 | orchestrator | 08:42:11.404 STDOUT terraform:  } 2025-02-04 08:42:11.404344 | orchestrator | 08:42:11.404 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-02-04 08:42:11.404404 | orchestrator | 08:42:11.404 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-02-04 08:42:11.404431 | orchestrator | 08:42:11.404 STDOUT terraform:  + direction = "ingress" 2025-02-04 08:42:11.404456 | orchestrator | 08:42:11.404 STDOUT terraform:  + ethertype = "IPv4" 2025-02-04 08:42:11.404492 | orchestrator | 08:42:11.404 STDOUT terraform:  + id = (known after apply) 2025-02-04 08:42:11.404515 | orchestrator | 08:42:11.404 STDOUT terraform:  + protocol = "udp" 2025-02-04 08:42:11.404550 | orchestrator | 08:42:11.404 STDOUT terraform:  + region = (known after apply) 2025-02-04 08:42:11.404585 | orchestrator | 08:42:11.404 STDOUT terraform:  + remote_group_id = (known after apply) 2025-02-04 08:42:11.404613 | orchestrator | 08:42:11.404 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-02-04 08:42:11.404649 | orchestrator | 08:42:11.404 STDOUT terraform:  + security_group_id = (known after apply) 2025-02-04 08:42:11.404686 | orchestrator | 08:42:11.404 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-04 08:42:11.404694 | orchestrator | 08:42:11.404 STDOUT terraform:  } 2025-02-04 08:42:11.404757 | orchestrator | 08:42:11.404 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-02-04 08:42:11.404818 | orchestrator | 08:42:11.404 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-02-04 08:42:11.404888 | orchestrator | 08:42:11.404 STDOUT terraform:  + direction = "ingress" 2025-02-04 08:42:11.404908 | orchestrator | 08:42:11.404 STDOUT terraform:  + ethertype = "IPv4" 2025-02-04 08:42:11.404915 | orchestrator | 08:42:11.404 STDOUT terraform:  + id = (known after apply) 2025-02-04 08:42:11.404933 | orchestrator | 08:42:11.404 STDOUT terraform:  + protocol = "icmp" 2025-02-04 08:42:11.404970 | orchestrator | 08:42:11.404 STDOUT terraform:  + region = (known after apply) 2025-02-04 08:42:11.405008 | orchestrator | 08:42:11.404 STDOUT terraform:  + remote_group_id = (known after apply) 2025-02-04 08:42:11.405035 | orchestrator | 08:42:11.405 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-02-04 08:42:11.405071 | orchestrator | 08:42:11.405 STDOUT terraform:  + security_group_id = (known after apply) 2025-02-04 08:42:11.405105 | orchestrator | 08:42:11.405 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-04 08:42:11.405113 | orchestrator | 08:42:11.405 STDOUT terraform:  } 2025-02-04 08:42:11.405176 | orchestrator | 08:42:11.405 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-02-04 08:42:11.405235 | 
orchestrator | 08:42:11.405 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-02-04 08:42:11.405260 | orchestrator | 08:42:11.405 STDOUT terraform:  + description = "vrrp" 2025-02-04 08:42:11.405286 | orchestrator | 08:42:11.405 STDOUT terraform:  + direction = "ingress" 2025-02-04 08:42:11.405309 | orchestrator | 08:42:11.405 STDOUT terraform:  + ethertype = "IPv4" 2025-02-04 08:42:11.405345 | orchestrator | 08:42:11.405 STDOUT terraform:  + id = (known after apply) 2025-02-04 08:42:11.405368 | orchestrator | 08:42:11.405 STDOUT terraform:  + protocol = "112" 2025-02-04 08:42:11.405404 | orchestrator | 08:42:11.405 STDOUT terraform:  + region = (known after apply) 2025-02-04 08:42:11.405443 | orchestrator | 08:42:11.405 STDOUT terraform:  + remote_group_id = (known after apply) 2025-02-04 08:42:11.405466 | orchestrator | 08:42:11.405 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-02-04 08:42:11.405500 | orchestrator | 08:42:11.405 STDOUT terraform:  + security_group_id = (known after apply) 2025-02-04 08:42:11.405536 | orchestrator | 08:42:11.405 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-04 08:42:11.405543 | orchestrator | 08:42:11.405 STDOUT terraform:  } 2025-02-04 08:42:11.405602 | orchestrator | 08:42:11.405 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-02-04 08:42:11.405660 | orchestrator | 08:42:11.405 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-02-04 08:42:11.405692 | orchestrator | 08:42:11.405 STDOUT terraform:  + all_tags = (known after apply) 2025-02-04 08:42:11.405731 | orchestrator | 08:42:11.405 STDOUT terraform:  + description = "management security group" 2025-02-04 08:42:11.405766 | orchestrator | 08:42:11.405 STDOUT terraform:  + id = (known after apply) 2025-02-04 08:42:11.405797 | orchestrator | 08:42:11.405 STDOUT terraform:  + name = "testbed-management" 2025-02-04 08:42:11.405830 | orchestrator | 08:42:11.405 STDOUT terraform:  + region = (known after apply) 2025-02-04 08:42:11.405880 | orchestrator | 08:42:11.405 STDOUT terraform:  + stateful = (known after apply) 2025-02-04 08:42:11.405910 | orchestrator | 08:42:11.405 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-04 08:42:11.405917 | orchestrator | 08:42:11.405 STDOUT terraform:  } 2025-02-04 08:42:11.405977 | orchestrator | 08:42:11.405 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-02-04 08:42:11.406044 | orchestrator | 08:42:11.405 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-02-04 08:42:11.406075 | orchestrator | 08:42:11.406 STDOUT terraform:  + all_tags = (known after apply) 2025-02-04 08:42:11.406108 | orchestrator | 08:42:11.406 STDOUT terraform:  + description = "node security group" 2025-02-04 08:42:11.406141 | orchestrator | 08:42:11.406 STDOUT terraform:  + id = (known after apply) 2025-02-04 08:42:11.406169 | orchestrator | 08:42:11.406 STDOUT terraform:  + name = "testbed-node" 2025-02-04 08:42:11.406201 | orchestrator | 08:42:11.406 STDOUT terraform:  + region = (known after apply) 2025-02-04 08:42:11.406233 | orchestrator | 08:42:11.406 STDOUT terraform:  + stateful = (known after apply) 2025-02-04 08:42:11.406265 | orchestrator | 08:42:11.406 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-04 08:42:11.406279 | orchestrator | 08:42:11.406 STDOUT terraform:  } 2025-02-04 
08:42:11.406332 | orchestrator | 08:42:11.406 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-02-04 08:42:11.406385 | orchestrator | 08:42:11.406 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-02-04 08:42:11.406425 | orchestrator | 08:42:11.406 STDOUT terraform:  + all_tags = (known after apply) 2025-02-04 08:42:11.406459 | orchestrator | 08:42:11.406 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-02-04 08:42:11.406479 | orchestrator | 08:42:11.406 STDOUT terraform:  + dns_nameservers = [ 2025-02-04 08:42:11.406493 | orchestrator | 08:42:11.406 STDOUT terraform:  + "8.8.8.8", 2025-02-04 08:42:11.406510 | orchestrator | 08:42:11.406 STDOUT terraform:  + "9.9.9.9", 2025-02-04 08:42:11.406518 | orchestrator | 08:42:11.406 STDOUT terraform:  ] 2025-02-04 08:42:11.406541 | orchestrator | 08:42:11.406 STDOUT terraform:  + enable_dhcp = true 2025-02-04 08:42:11.406575 | orchestrator | 08:42:11.406 STDOUT terraform:  + gateway_ip = (known after apply) 2025-02-04 08:42:11.406611 | orchestrator | 08:42:11.406 STDOUT terraform:  + id = (known after apply) 2025-02-04 08:42:11.406633 | orchestrator | 08:42:11.406 STDOUT terraform:  + ip_version = 4 2025-02-04 08:42:11.406668 | orchestrator | 08:42:11.406 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-02-04 08:42:11.406703 | orchestrator | 08:42:11.406 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-02-04 08:42:11.406746 | orchestrator | 08:42:11.406 STDOUT terraform:  + name = "subnet-testbed-management" 2025-02-04 08:42:11.406782 | orchestrator | 08:42:11.406 STDOUT terraform:  + network_id = (known after apply) 2025-02-04 08:42:11.406805 | orchestrator | 08:42:11.406 STDOUT terraform:  + no_gateway = false 2025-02-04 08:42:11.406853 | orchestrator | 08:42:11.406 STDOUT terraform:  + region = (known after apply) 2025-02-04 08:42:11.406886 | orchestrator | 08:42:11.406 STDOUT terraform:  + service_types = (known after apply) 2025-02-04 08:42:11.406923 | orchestrator | 08:42:11.406 STDOUT terraform:  + tenant_id = (known after apply) 2025-02-04 08:42:11.406943 | orchestrator | 08:42:11.406 STDOUT terraform:  + allocation_pool { 2025-02-04 08:42:11.406970 | orchestrator | 08:42:11.406 STDOUT terraform:  + end = "192.168.31.250" 2025-02-04 08:42:11.406997 | orchestrator | 08:42:11.406 STDOUT terraform:  + start = "192.168.31.200" 2025-02-04 08:42:11.407004 | orchestrator | 08:42:11.406 STDOUT terraform:  } 2025-02-04 08:42:11.407021 | orchestrator | 08:42:11.407 STDOUT terraform:  } 2025-02-04 08:42:11.407049 | orchestrator | 08:42:11.407 STDOUT terraform:  # terraform_data.image will be created 2025-02-04 08:42:11.407075 | orchestrator | 08:42:11.407 STDOUT terraform:  + resource "terraform_data" "image" { 2025-02-04 08:42:11.407103 | orchestrator | 08:42:11.407 STDOUT terraform:  + id = (known after apply) 2025-02-04 08:42:11.407124 | orchestrator | 08:42:11.407 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-02-04 08:42:11.407151 | orchestrator | 08:42:11.407 STDOUT terraform:  + output = (known after apply) 2025-02-04 08:42:11.407159 | orchestrator | 08:42:11.407 STDOUT terraform:  } 2025-02-04 08:42:11.407193 | orchestrator | 08:42:11.407 STDOUT terraform:  # terraform_data.image_node will be created 2025-02-04 08:42:11.407227 | orchestrator | 08:42:11.407 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-02-04 08:42:11.407254 | orchestrator | 08:42:11.407 STDOUT terraform:  + id = (known after apply) 2025-02-04 
08:42:11.407276 | orchestrator | 08:42:11.407 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-02-04 08:42:11.407302 | orchestrator | 08:42:11.407 STDOUT terraform:  + output = (known after apply) 2025-02-04 08:42:11.407318 | orchestrator | 08:42:11.407 STDOUT terraform:  } 2025-02-04 08:42:11.407351 | orchestrator | 08:42:11.407 STDOUT terraform: Plan: 82 to add, 0 to change, 0 to destroy. 2025-02-04 08:42:11.407366 | orchestrator | 08:42:11.407 STDOUT terraform: Changes to Outputs: 2025-02-04 08:42:11.407393 | orchestrator | 08:42:11.407 STDOUT terraform:  + manager_address = (sensitive value) 2025-02-04 08:42:11.407420 | orchestrator | 08:42:11.407 STDOUT terraform:  + private_key = (sensitive value) 2025-02-04 08:42:11.477574 | orchestrator | 08:42:11.477 STDOUT terraform: terraform_data.image_node: Creating... 2025-02-04 08:42:11.477675 | orchestrator | 08:42:11.477 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=2c44e60a-c4a3-c602-bf06-ae1b312d338f] 2025-02-04 08:42:11.602992 | orchestrator | 08:42:11.602 STDOUT terraform: terraform_data.image: Creating... 2025-02-04 08:42:11.603516 | orchestrator | 08:42:11.603 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=129404b9-1e99-119f-28b8-ea7c0502b4b3] 2025-02-04 08:42:11.616215 | orchestrator | 08:42:11.616 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-02-04 08:42:11.616442 | orchestrator | 08:42:11.616 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-02-04 08:42:11.619725 | orchestrator | 08:42:11.619 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creating... 2025-02-04 08:42:11.625910 | orchestrator | 08:42:11.625 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creating... 2025-02-04 08:42:11.626621 | orchestrator | 08:42:11.626 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-02-04 08:42:11.626911 | orchestrator | 08:42:11.626 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creating... 2025-02-04 08:42:11.628409 | orchestrator | 08:42:11.628 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-02-04 08:42:11.629479 | orchestrator | 08:42:11.629 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-02-04 08:42:11.631053 | orchestrator | 08:42:11.630 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-02-04 08:42:11.631718 | orchestrator | 08:42:11.631 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-02-04 08:42:12.099464 | orchestrator | 08:42:12.099 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-02-04 08:42:12.108343 | orchestrator | 08:42:12.108 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creating... 2025-02-04 08:42:12.363086 | orchestrator | 08:42:12.362 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2025-02-04 08:42:12.370791 | orchestrator | 08:42:12.370 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-02-04 08:42:12.410263 | orchestrator | 08:42:12.409 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-02-04 08:42:12.417490 | orchestrator | 08:42:12.417 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creating... 
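The six node_port_management ports planned above differ only in their fixed IP (192.168.16.10 through 192.168.16.15); every one of them carries the same allowed_address_pairs, so Neutron's port security will accept traffic addressed to the shared VIPs (192.168.16.8, 192.168.16.9, 192.168.16.254) and to the 192.168.112.0/20 range. A minimal HCL sketch of how such a plan can be expressed, reconstructed from the plan output alone (the count, variable names and resource references are assumptions, not necessarily the testbed repository's actual code):

    # Sketch reconstructed from the plan output above -- not the repo's verbatim code.
    resource "openstack_networking_port_v2" "node_port_management" {
      count      = 6  # node_port_management[0]..[5] in the plan
      network_id = openstack_networking_network_v2.net_management.id

      fixed_ip {
        subnet_id  = openstack_networking_subnet_v2.subnet_management.id
        ip_address = "192.168.16.${10 + count.index}"  # .10 through .15, as planned
      }

      # Identical on all six ports, matching the plan output:
      allowed_address_pairs {
        ip_address = "192.168.112.0/20"
      }
      allowed_address_pairs {
        ip_address = "192.168.16.254/20"
      }
      allowed_address_pairs {
        ip_address = "192.168.16.8/20"
      }
      allowed_address_pairs {
        ip_address = "192.168.16.9/20"
      }
    }

Declaring the port once with count is what yields the six near-identical per-index entries seen in the plan.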
2025-02-04 08:42:17.648891 | orchestrator | 08:42:17.648 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=0f5c069b-c54c-4331-a293-52372b69da4b] 2025-02-04 08:42:17.657016 | orchestrator | 08:42:17.656 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creating... 2025-02-04 08:42:21.621283 | orchestrator | 08:42:21.620 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Still creating... [10s elapsed] 2025-02-04 08:42:21.630676 | orchestrator | 08:42:21.630 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Still creating... [10s elapsed] 2025-02-04 08:42:21.631822 | orchestrator | 08:42:21.631 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Still creating... [10s elapsed] 2025-02-04 08:42:21.632253 | orchestrator | 08:42:21.632 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-02-04 08:42:21.633948 | orchestrator | 08:42:21.633 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed] 2025-02-04 08:42:21.635101 | orchestrator | 08:42:21.634 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-02-04 08:42:22.109150 | orchestrator | 08:42:22.108 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Still creating... [10s elapsed] 2025-02-04 08:42:22.222967 | orchestrator | 08:42:22.222 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creation complete after 10s [id=212bd4e9-d9e4-4fb6-aa1a-c75a1354e796] 2025-02-04 08:42:22.227144 | orchestrator | 08:42:22.226 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-02-04 08:42:22.246296 | orchestrator | 08:42:22.246 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 10s [id=4d0f6c82-70d7-420b-af8d-33a5666fb869] 2025-02-04 08:42:22.255259 | orchestrator | 08:42:22.255 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-02-04 08:42:22.276554 | orchestrator | 08:42:22.276 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 10s [id=901ca6f0-6e57-48c8-bbd3-08a878599b73] 2025-02-04 08:42:22.281963 | orchestrator | 08:42:22.281 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creation complete after 10s [id=77d1cf45-53d9-435f-b362-8711a42fa03b] 2025-02-04 08:42:22.285326 | orchestrator | 08:42:22.285 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creating... 2025-02-04 08:42:22.287673 | orchestrator | 08:42:22.287 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-02-04 08:42:22.300557 | orchestrator | 08:42:22.300 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 10s [id=84ce1e5e-93f1-4e17-9c74-c98d07335b49] 2025-02-04 08:42:22.304984 | orchestrator | 08:42:22.304 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-02-04 08:42:22.316417 | orchestrator | 08:42:22.316 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creation complete after 10s [id=38b972c8-2def-4835-a52d-e389734565af] 2025-02-04 08:42:22.328273 | orchestrator | 08:42:22.328 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creating... 2025-02-04 08:42:22.371781 | orchestrator | 08:42:22.371 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... 
[10s elapsed] 2025-02-04 08:42:22.376310 | orchestrator | 08:42:22.375 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creation complete after 10s [id=42292afd-cea6-4e4a-b321-0bc7a4bab513] 2025-02-04 08:42:22.384095 | orchestrator | 08:42:22.383 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creating... 2025-02-04 08:42:22.419085 | orchestrator | 08:42:22.418 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Still creating... [10s elapsed] 2025-02-04 08:42:22.553408 | orchestrator | 08:42:22.552 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 11s [id=6f1478d2-b213-4f65-abc0-539a0d8b61fa] 2025-02-04 08:42:22.564921 | orchestrator | 08:42:22.564 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-02-04 08:42:22.600138 | orchestrator | 08:42:22.599 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creation complete after 11s [id=d26fda4b-4cd5-4c78-8c80-a561505edb1a] 2025-02-04 08:42:22.613641 | orchestrator | 08:42:22.613 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-02-04 08:42:27.662139 | orchestrator | 08:42:27.661 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Still creating... [10s elapsed] 2025-02-04 08:42:27.828313 | orchestrator | 08:42:27.827 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creation complete after 10s [id=5ef04da4-33c0-4c31-8f35-70c17ff294fe] 2025-02-04 08:42:27.840283 | orchestrator | 08:42:27.840 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-02-04 08:42:27.850117 | orchestrator | 08:42:27.849 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=5ce485ba4d323faec029024badf494dfb45951e9] 2025-02-04 08:42:27.857245 | orchestrator | 08:42:27.856 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-02-04 08:42:27.864665 | orchestrator | 08:42:27.864 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=eb5b7f4dba81ded803c6699ba3577eaca1402285] 2025-02-04 08:42:27.877698 | orchestrator | 08:42:27.877 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-02-04 08:42:32.228729 | orchestrator | 08:42:32.228 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-02-04 08:42:32.257255 | orchestrator | 08:42:32.256 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-02-04 08:42:32.286015 | orchestrator | 08:42:32.285 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Still creating... [10s elapsed] 2025-02-04 08:42:32.288792 | orchestrator | 08:42:32.288 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-02-04 08:42:32.306294 | orchestrator | 08:42:32.305 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-02-04 08:42:32.329607 | orchestrator | 08:42:32.329 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Still creating... [10s elapsed] 2025-02-04 08:42:32.385166 | orchestrator | 08:42:32.384 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Still creating... 
[10s elapsed] 2025-02-04 08:42:32.424920 | orchestrator | 08:42:32.424 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=843e4ddb-3456-4bc2-9151-43109d21e883] 2025-02-04 08:42:32.566270 | orchestrator | 08:42:32.565 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-02-04 08:42:32.615024 | orchestrator | 08:42:32.614 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed] 2025-02-04 08:42:32.812429 | orchestrator | 08:42:32.811 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 11s [id=17d2e1d1-143d-4df8-b794-06a49264520c] 2025-02-04 08:42:32.813303 | orchestrator | 08:42:32.812 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 11s [id=3639d977-d811-449d-b930-d83a01ae7e68] 2025-02-04 08:42:32.813950 | orchestrator | 08:42:32.813 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creation complete after 11s [id=81f63dc5-7b43-4c99-9b7b-2b520b540dae] 2025-02-04 08:42:32.814355 | orchestrator | 08:42:32.814 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 11s [id=263b1482-edb0-40a4-b8be-a0e8e90b1cea] 2025-02-04 08:42:32.819041 | orchestrator | 08:42:32.818 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creation complete after 11s [id=2e725b5a-39a0-4c9f-add8-ff554d181543] 2025-02-04 08:42:32.828230 | orchestrator | 08:42:32.821 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 10s [id=d5e896df-3760-43bc-823d-dd864c8452e8] 2025-02-04 08:42:32.841065 | orchestrator | 08:42:32.828 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creation complete after 11s [id=c8a0131d-fae0-46a9-a275-20bf3d241b40] 2025-02-04 08:42:32.841119 | orchestrator | 08:42:32.840 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-02-04 08:42:32.847877 | orchestrator | 08:42:32.847 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-02-04 08:42:32.848755 | orchestrator | 08:42:32.848 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-02-04 08:42:32.854267 | orchestrator | 08:42:32.854 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-02-04 08:42:32.865024 | orchestrator | 08:42:32.854 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-02-04 08:42:32.865072 | orchestrator | 08:42:32.864 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-02-04 08:42:33.000812 | orchestrator | 08:42:33.000 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=fbba87c4-ddc1-4dbb-882a-8042673fa097] 2025-02-04 08:42:37.879277 | orchestrator | 08:42:37.878 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... 
[10s elapsed] 2025-02-04 08:42:38.215122 | orchestrator | 08:42:38.214 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 10s [id=213e25e0-25b4-4110-96e2-3b7485daf5ef] 2025-02-04 08:42:38.627656 | orchestrator | 08:42:38.627 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=62641d85-0ee2-4679-ae68-0c9fac808ab4] 2025-02-04 08:42:38.634909 | orchestrator | 08:42:38.634 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-02-04 08:42:42.851697 | orchestrator | 08:42:42.851 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed] 2025-02-04 08:42:42.855853 | orchestrator | 08:42:42.855 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed] 2025-02-04 08:42:42.855929 | orchestrator | 08:42:42.855 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed] 2025-02-04 08:42:42.857928 | orchestrator | 08:42:42.857 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed] 2025-02-04 08:42:42.858290 | orchestrator | 08:42:42.858 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed] 2025-02-04 08:42:43.239135 | orchestrator | 08:42:43.238 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 10s [id=34756b4f-e35d-475a-95c3-a17bc4378557] 2025-02-04 08:42:43.260365 | orchestrator | 08:42:43.259 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 10s [id=047150d6-6f6c-4019-b80e-4be9a7d65c24] 2025-02-04 08:42:43.283624 | orchestrator | 08:42:43.283 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 10s [id=a424fac1-723d-4e26-82ac-15e9ac8e6afc] 2025-02-04 08:42:43.297248 | orchestrator | 08:42:43.296 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=65dbfb35-c088-49c9-9717-b00e675ef863] 2025-02-04 08:42:43.316147 | orchestrator | 08:42:43.315 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=08b5b955-2a19-469b-a49b-98bfe933a640] 2025-02-04 08:42:45.430083 | orchestrator | 08:42:45.429 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 6s [id=3d5385e9-51e9-4cbe-bc1b-cfd8319b5a4f] 2025-02-04 08:42:45.436113 | orchestrator | 08:42:45.435 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-02-04 08:42:45.436338 | orchestrator | 08:42:45.436 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-02-04 08:42:45.438388 | orchestrator | 08:42:45.438 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-02-04 08:42:45.559366 | orchestrator | 08:42:45.559 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=884f2b51-066e-4cfc-b8ac-b6372bf96736] 2025-02-04 08:42:45.569444 | orchestrator | 08:42:45.568 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=6a908b45-1f9f-41b2-93cc-ed6d7f294db6] 2025-02-04 08:42:45.573808 | orchestrator | 08:42:45.573 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 
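The management network pieces come up here in dependency order: subnet_management completes after 6s, the router follows against the fixed external network id, and the router_interface that joins them is created last. Expressed as HCL, using the attribute values shown in the plan above (a sketch under those values, not the repository's verbatim code):

    resource "openstack_networking_subnet_v2" "subnet_management" {
      name            = "subnet-testbed-management"
      network_id      = openstack_networking_network_v2.net_management.id
      cidr            = "192.168.16.0/20"
      ip_version      = 4
      enable_dhcp     = true
      dns_nameservers = ["8.8.8.8", "9.9.9.9"]

      # DHCP hands out only this range, keeping it clear of the statically
      # addressed ports created elsewhere in the /20:
      allocation_pool {
        start = "192.168.31.200"
        end   = "192.168.31.250"
      }
    }

    resource "openstack_networking_router_v2" "router" {
      name                    = "testbed"
      external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"  # the cloud's public network
      availability_zone_hints = ["nova"]
    }

    resource "openstack_networking_router_interface_v2" "router_interface" {
      router_id = openstack_networking_router_v2.router.id
      subnet_id = openstack_networking_subnet_v2.subnet_management.id
    }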
2025-02-04 08:42:45.574092 | orchestrator | 08:42:45.573 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-02-04 08:42:45.580253 | orchestrator | 08:42:45.580 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-02-04 08:42:45.581851 | orchestrator | 08:42:45.581 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-02-04 08:42:45.583650 | orchestrator | 08:42:45.583 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-02-04 08:42:45.583979 | orchestrator | 08:42:45.583 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-02-04 08:42:45.585911 | orchestrator | 08:42:45.585 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-02-04 08:42:45.587831 | orchestrator | 08:42:45.587 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-02-04 08:42:45.593889 | orchestrator | 08:42:45.593 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-02-04 08:42:45.736017 | orchestrator | 08:42:45.735 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=29d26f8c-f74d-47db-ac77-9f13c8eac8f3] 2025-02-04 08:42:45.746659 | orchestrator | 08:42:45.746 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-02-04 08:42:45.860577 | orchestrator | 08:42:45.860 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=afae5a1b-9d32-488f-9e89-640411e636ba] 2025-02-04 08:42:45.870229 | orchestrator | 08:42:45.869 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-02-04 08:42:46.169760 | orchestrator | 08:42:46.169 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=58c1c5b9-678d-4285-b6e3-3c2f9f4143f1] 2025-02-04 08:42:46.185456 | orchestrator | 08:42:46.185 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-02-04 08:42:46.334303 | orchestrator | 08:42:46.333 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=26e29e51-aa96-4bc2-a6df-7f7d16d20f03] 2025-02-04 08:42:46.348419 | orchestrator | 08:42:46.348 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-02-04 08:42:46.910927 | orchestrator | 08:42:46.910 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=18219f4e-4d81-4a49-a10e-8eff1832e6d6] 2025-02-04 08:42:46.928502 | orchestrator | 08:42:46.928 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-02-04 08:42:47.047124 | orchestrator | 08:42:47.046 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=fe975c25-d34e-424c-93d9-e8501a705dd8] 2025-02-04 08:42:47.053715 | orchestrator | 08:42:47.053 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 
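Each rule being created here was planned with its full attribute set further up: security_group_management_rule1 is ssh (tcp/22 from 0.0.0.0/0), rule2 is wireguard (udp/51820), rules 3 and 4 open tcp and udp within 192.168.16.0/20, rule5 is icmp, and security_group_rule_vrrp uses IP protocol 112. Two of them as HCL, grounded in those plan attributes (the group a rule attaches to shows only as "(known after apply)" in the plan, so the security_group_id references below are assumptions):

    resource "openstack_networking_secgroup_v2" "security_group_management" {
      name        = "testbed-management"
      description = "management security group"
    }

    resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      description       = "ssh"
      direction         = "ingress"
      ethertype         = "IPv4"
      protocol          = "tcp"
      port_range_min    = 22
      port_range_max    = 22
      remote_ip_prefix  = "0.0.0.0/0"
      security_group_id = openstack_networking_secgroup_v2.security_group_management.id
    }

    resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      description       = "vrrp"
      direction         = "ingress"
      ethertype         = "IPv4"
      protocol          = "112"  # VRRP is IP protocol 112; no port range applies
      remote_ip_prefix  = "0.0.0.0/0"
      # Attachment assumed -- the plan shows security_group_id as (known after apply):
      security_group_id = openstack_networking_secgroup_v2.security_group_management.id
    }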
2025-02-04 08:42:47.193333 | orchestrator | 08:42:47.192 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=a1d58b65-b241-4904-950d-aaf10ee6752c] 2025-02-04 08:42:47.199936 | orchestrator | 08:42:47.199 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-02-04 08:42:47.322850 | orchestrator | 08:42:47.322 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=61939867-4b13-4c3c-a6c4-ac81186665b1] 2025-02-04 08:42:47.407293 | orchestrator | 08:42:47.406 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=9fa5c077-d1de-458e-9935-8b212ce0d3b7] 2025-02-04 08:42:51.150392 | orchestrator | 08:42:51.149 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 5s [id=bc740388-1657-43a4-b732-c932b6894a34] 2025-02-04 08:42:51.558775 | orchestrator | 08:42:51.558 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=497bdf92-437f-4ada-bcba-fbc7e0e08f96] 2025-02-04 08:42:51.682696 | orchestrator | 08:42:51.682 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=2f2c947f-242b-489f-b287-bfcae87a8bf6] 2025-02-04 08:42:51.886597 | orchestrator | 08:42:51.885 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=105cecd5-bfc9-4dd1-9aa5-560a60f67b4a] 2025-02-04 08:42:51.925376 | orchestrator | 08:42:51.924 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=7f8daea3-bc26-4f0a-be75-79a4d2492443] 2025-02-04 08:42:52.022328 | orchestrator | 08:42:52.021 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=196f9997-894f-4daf-8c85-e47953674213] 2025-02-04 08:42:52.066999 | orchestrator | 08:42:52.066 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 7s [id=b2768508-7886-4173-9b63-2b9e8613d227] 2025-02-04 08:42:52.074444 | orchestrator | 08:42:52.074 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-02-04 08:42:52.436811 | orchestrator | 08:42:52.436 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 5s [id=8735a19d-4ff1-477f-a163-14693db6c3ab] 2025-02-04 08:42:52.470578 | orchestrator | 08:42:52.470 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-02-04 08:42:52.472506 | orchestrator | 08:42:52.472 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-02-04 08:42:52.473458 | orchestrator | 08:42:52.473 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-02-04 08:42:52.485218 | orchestrator | 08:42:52.485 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-02-04 08:42:52.485322 | orchestrator | 08:42:52.485 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-02-04 08:42:52.489232 | orchestrator | 08:42:52.489 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 
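For orientation, the Terraform resources streaming by above map onto plain OpenStack API calls. A rough CLI equivalent of the management network plumbing, with the names and subnet range as illustrative assumptions rather than values read from the testbed plan:

  # Sketch only: approximate CLI counterpart of the resources created above.
  openstack volume create --size 20 testbed-node-base-volume-0          # node_base_volume[N]
  openstack subnet create --network net-management --subnet-range 192.168.16.0/20 subnet-management
  openstack router create router
  openstack router add subnet router subnet-management                  # router_interface
  openstack security group create security-group-management
  openstack security group rule create --protocol tcp --dst-port 22 security-group-management
  openstack port create --network net-management node-port-management-0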
2025-02-04 08:42:58.595259 | orchestrator | 08:42:58.594 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=0f29b9ac-faa6-44bf-8542-7a8ca2d1db4f] 2025-02-04 08:42:58.617282 | orchestrator | 08:42:58.617 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-02-04 08:42:58.618820 | orchestrator | 08:42:58.618 STDOUT terraform: local_file.inventory: Creating... 2025-02-04 08:42:58.623175 | orchestrator | 08:42:58.623 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-02-04 08:42:58.623610 | orchestrator | 08:42:58.623 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=bf037c32b1dce3fbe004cbb9f2af070cc0dded21] 2025-02-04 08:42:58.627127 | orchestrator | 08:42:58.626 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=3421ec511cad1dbccf992a281e35ebe6cc11aaed] 2025-02-04 08:42:59.147032 | orchestrator | 08:42:59.146 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 0s [id=0f29b9ac-faa6-44bf-8542-7a8ca2d1db4f] 2025-02-04 08:43:02.475191 | orchestrator | 08:43:02.474 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-02-04 08:43:02.475310 | orchestrator | 08:43:02.475 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-02-04 08:43:02.477130 | orchestrator | 08:43:02.476 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-02-04 08:43:02.486692 | orchestrator | 08:43:02.486 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-02-04 08:43:02.486802 | orchestrator | 08:43:02.486 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-02-04 08:43:02.494855 | orchestrator | 08:43:02.494 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-02-04 08:43:12.476198 | orchestrator | 08:43:12.475 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-02-04 08:43:12.476383 | orchestrator | 08:43:12.476 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-02-04 08:43:12.477604 | orchestrator | 08:43:12.477 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-02-04 08:43:12.487159 | orchestrator | 08:43:12.486 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-02-04 08:43:12.487354 | orchestrator | 08:43:12.487 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-02-04 08:43:12.495524 | orchestrator | 08:43:12.495 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... 
[20s elapsed] 2025-02-04 08:43:12.848852 | orchestrator | 08:43:12.848 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 21s [id=06e6210d-e311-4791-a5d1-61b7232d80be] 2025-02-04 08:43:12.947463 | orchestrator | 08:43:12.947 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 21s [id=b6526c5c-9aff-4a76-aae3-03626f06edc4] 2025-02-04 08:43:12.996028 | orchestrator | 08:43:12.995 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 21s [id=e4de56e2-c917-4203-97d6-449889805019] 2025-02-04 08:43:13.079221 | orchestrator | 08:43:13.078 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=ff66aac5-cb1d-4ac0-a00e-c9e63a7e017c] 2025-02-04 08:43:22.480740 | orchestrator | 08:43:22.480 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2025-02-04 08:43:23.197649 | orchestrator | 08:43:22.480 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2025-02-04 08:43:23.197800 | orchestrator | 08:43:23.197 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=3f62bf94-657a-492d-8fea-f738573ef883] 2025-02-04 08:43:23.238056 | orchestrator | 08:43:23.237 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=2f938e37-71d4-4b9c-ae0b-600e2081daff] 2025-02-04 08:43:23.250947 | orchestrator | 08:43:23.250 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-02-04 08:43:23.269420 | orchestrator | 08:43:23.269 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=7703533951013623256] 2025-02-04 08:43:23.280764 | orchestrator | 08:43:23.280 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creating... 2025-02-04 08:43:23.282486 | orchestrator | 08:43:23.282 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-02-04 08:43:23.282694 | orchestrator | 08:43:23.282 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creating... 2025-02-04 08:43:23.283270 | orchestrator | 08:43:23.283 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creating... 2025-02-04 08:43:23.285501 | orchestrator | 08:43:23.285 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-02-04 08:43:23.285907 | orchestrator | 08:43:23.285 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creating... 2025-02-04 08:43:23.291551 | orchestrator | 08:43:23.291 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-02-04 08:43:23.304302 | orchestrator | 08:43:23.291 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-02-04 08:43:23.304405 | orchestrator | 08:43:23.304 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creating... 2025-02-04 08:43:23.310125 | orchestrator | 08:43:23.308 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creating... 
2025-02-04 08:43:28.734872 | orchestrator | 08:43:28.734 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 6s [id=06e6210d-e311-4791-a5d1-61b7232d80be/84ce1e5e-93f1-4e17-9c74-c98d07335b49] 2025-02-04 08:43:28.738158 | orchestrator | 08:43:28.737 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 6s [id=2f938e37-71d4-4b9c-ae0b-600e2081daff/843e4ddb-3456-4bc2-9151-43109d21e883] 2025-02-04 08:43:28.754576 | orchestrator | 08:43:28.754 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-02-04 08:43:28.756337 | orchestrator | 08:43:28.756 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-02-04 08:43:28.765398 | orchestrator | 08:43:28.765 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 6s [id=e4de56e2-c917-4203-97d6-449889805019/4d0f6c82-70d7-420b-af8d-33a5666fb869] 2025-02-04 08:43:28.772353 | orchestrator | 08:43:28.772 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creation complete after 6s [id=ff66aac5-cb1d-4ac0-a00e-c9e63a7e017c/2e725b5a-39a0-4c9f-add8-ff554d181543] 2025-02-04 08:43:28.779351 | orchestrator | 08:43:28.779 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-02-04 08:43:28.782240 | orchestrator | 08:43:28.782 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-02-04 08:43:28.963667 | orchestrator | 08:43:28.963 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creation complete after 6s [id=b6526c5c-9aff-4a76-aae3-03626f06edc4/5ef04da4-33c0-4c31-8f35-70c17ff294fe] 2025-02-04 08:43:28.983004 | orchestrator | 08:43:28.982 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creation complete after 6s [id=3f62bf94-657a-492d-8fea-f738573ef883/c8a0131d-fae0-46a9-a275-20bf3d241b40] 2025-02-04 08:43:28.983473 | orchestrator | 08:43:28.983 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creating... 2025-02-04 08:43:28.991876 | orchestrator | 08:43:28.991 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-02-04 08:43:31.972956 | orchestrator | 08:43:31.972 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creation complete after 9s [id=06e6210d-e311-4791-a5d1-61b7232d80be/212bd4e9-d9e4-4fb6-aa1a-c75a1354e796] 2025-02-04 08:43:31.984752 | orchestrator | 08:43:31.984 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creating... 2025-02-04 08:43:32.009262 | orchestrator | 08:43:32.008 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 9s [id=ff66aac5-cb1d-4ac0-a00e-c9e63a7e017c/6f1478d2-b213-4f65-abc0-539a0d8b61fa] 2025-02-04 08:43:32.026614 | orchestrator | 08:43:32.026 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creating... 2025-02-04 08:43:32.123720 | orchestrator | 08:43:32.123 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creation complete after 9s [id=3f62bf94-657a-492d-8fea-f738573ef883/81f63dc5-7b43-4c99-9b7b-2b520b540dae] 2025-02-04 08:43:32.146253 | orchestrator | 08:43:32.145 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 
2025-02-04 08:43:32.151535 | orchestrator | 08:43:32.151 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creation complete after 9s [id=b6526c5c-9aff-4a76-aae3-03626f06edc4/77d1cf45-53d9-435f-b362-8711a42fa03b] 2025-02-04 08:43:34.270372 | orchestrator | 08:43:34.264 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=e4de56e2-c917-4203-97d6-449889805019/263b1482-edb0-40a4-b8be-a0e8e90b1cea] 2025-02-04 08:43:34.285317 | orchestrator | 08:43:34.284 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 5s [id=2f938e37-71d4-4b9c-ae0b-600e2081daff/901ca6f0-6e57-48c8-bbd3-08a878599b73] 2025-02-04 08:43:34.353517 | orchestrator | 08:43:34.352 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creation complete after 5s [id=2f938e37-71d4-4b9c-ae0b-600e2081daff/38b972c8-2def-4835-a52d-e389734565af] 2025-02-04 08:43:34.489877 | orchestrator | 08:43:34.489 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=06e6210d-e311-4791-a5d1-61b7232d80be/17d2e1d1-143d-4df8-b794-06a49264520c] 2025-02-04 08:43:35.172741 | orchestrator | 08:43:35.172 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 6s [id=b6526c5c-9aff-4a76-aae3-03626f06edc4/3639d977-d811-449d-b930-d83a01ae7e68] 2025-02-04 08:43:35.174816 | orchestrator | 08:43:35.174 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 6s [id=3f62bf94-657a-492d-8fea-f738573ef883/d5e896df-3760-43bc-823d-dd864c8452e8] 2025-02-04 08:43:37.306156 | orchestrator | 08:43:37.305 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creation complete after 5s [id=ff66aac5-cb1d-4ac0-a00e-c9e63a7e017c/d26fda4b-4cd5-4c78-8c80-a561505edb1a] 2025-02-04 08:43:37.367711 | orchestrator | 08:43:37.367 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creation complete after 5s [id=e4de56e2-c917-4203-97d6-449889805019/42292afd-cea6-4e4a-b321-0bc7a4bab513] 2025-02-04 08:43:42.148151 | orchestrator | 08:43:42.147 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-02-04 08:43:52.152025 | orchestrator | 08:43:52.151 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-02-04 08:43:52.804955 | orchestrator | 08:43:52.804 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=7999ae25-58a7-4c2e-90c2-3bef53289822] 2025-02-04 08:43:52.831834 | orchestrator | 08:43:52.831 STDOUT terraform: Apply complete! Resources: 82 added, 0 changed, 0 destroyed. 
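One detail worth pulling out of the attachment stream: there are 18 node_volume_attachment resources for 6 node servers, and matching the instance UUIDs inside the attachment IDs against the node_server IDs above shows that attachment i always lands on node_server[i % 6], i.e. three extra volumes per node. A small bash sketch of that index arithmetic, using the counts from this log:

  NUMBER_OF_NODES=6
  ATTACHMENTS=18
  for ((i = 0; i < ATTACHMENTS; i++)); do
    node=$((i % NUMBER_OF_NODES))   # node_volume_attachment[i] -> node_server[i % 6]
    disk=$((i / NUMBER_OF_NODES))   # attachments 0, 6, 12 are the three volumes of node 0
    echo "attachment[$i] -> node_server[$node], data volume $disk"
  done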
2025-02-04 08:43:52.831956 | orchestrator | 08:43:52.831 STDOUT terraform: Outputs: 2025-02-04 08:43:52.832026 | orchestrator | 08:43:52.831 STDOUT terraform: manager_address = 2025-02-04 08:43:52.832076 | orchestrator | 08:43:52.831 STDOUT terraform: private_key = 2025-02-04 08:43:53.096832 | orchestrator | changed 2025-02-04 08:43:53.131168 | 2025-02-04 08:43:53.131330 | TASK [Create infrastructure (stable)] 2025-02-04 08:43:53.267949 | orchestrator | skipping: Conditional result was False 2025-02-04 08:43:53.278332 | 2025-02-04 08:43:53.278470 | TASK [Fetch manager address] 2025-02-04 08:44:04.207423 | orchestrator | ok 2025-02-04 08:44:04.217825 | 2025-02-04 08:44:04.217952 | TASK [Set manager_host address] 2025-02-04 08:44:04.312712 | orchestrator | ok 2025-02-04 08:44:04.322291 | 2025-02-04 08:44:04.322409 | LOOP [Update ansible collections] 2025-02-04 08:44:05.224764 | orchestrator | changed 2025-02-04 08:44:06.122270 | orchestrator | changed 2025-02-04 08:44:06.143659 | 2025-02-04 08:44:06.143786 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-02-04 08:44:16.750277 | orchestrator | ok 2025-02-04 08:44:16.767974 | 2025-02-04 08:44:16.768115 | TASK [Wait a little longer for the manager so that everything is ready] 2025-02-04 08:45:16.820865 | orchestrator | ok 2025-02-04 08:45:16.832843 | 2025-02-04 08:45:16.832958 | TASK [Fetch manager ssh hostkey] 2025-02-04 08:45:17.928339 | orchestrator | Output suppressed because no_log was given 2025-02-04 08:45:17.941965 | 2025-02-04 08:45:17.942218 | TASK [Get ssh keypair from terraform environment] 2025-02-04 08:45:18.512207 | orchestrator | changed 2025-02-04 08:45:18.522890 | 2025-02-04 08:45:18.523027 | TASK [Point out that the following task takes some time and does not give any output] 2025-02-04 08:45:18.575439 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
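The "Fetch manager address" task and the two wait tasks above can be pictured as a few lines of shell. terraform output -raw is the standard way to read a single output value; the nc banner check is just one plausible way to reproduce the "port 22 open and contains OpenSSH" condition, not the job's actual implementation:

  # Assumed stand-in for the fetch + wait tasks above.
  MANAGER_ADDRESS=$(terraform output -raw manager_address)
  for ((waited = 0; waited < 300; waited += 5)); do
    # sshd sends its version banner first, so a single read is enough.
    if nc -w 2 "$MANAGER_ADDRESS" 22 </dev/null | grep -q OpenSSH; then
      echo "manager SSH is up after ${waited}s"
      break
    fi
    sleep 5
  done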
2025-02-04 08:45:18.632136 | 2025-02-04 08:45:18.633255 | TASK [Run manager part 0] 2025-02-04 08:45:19.669833 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-02-04 08:45:19.714213 | orchestrator | 2025-02-04 08:45:21.741708 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-02-04 08:45:21.741811 | orchestrator | 2025-02-04 08:45:21.741944 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-02-04 08:45:21.742054 | orchestrator | ok: [testbed-manager] 2025-02-04 08:45:23.693532 | orchestrator | 2025-02-04 08:45:23.693607 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-02-04 08:45:23.693620 | orchestrator | 2025-02-04 08:45:23.693627 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-04 08:45:23.693640 | orchestrator | ok: [testbed-manager] 2025-02-04 08:45:24.343614 | orchestrator | 2025-02-04 08:45:24.343715 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-02-04 08:45:24.343748 | orchestrator | ok: [testbed-manager] 2025-02-04 08:45:24.385599 | orchestrator | 2025-02-04 08:45:24.385653 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-02-04 08:45:24.385669 | orchestrator | skipping: [testbed-manager] 2025-02-04 08:45:24.414113 | orchestrator | 2025-02-04 08:45:24.414166 | orchestrator | TASK [Update package cache] **************************************************** 2025-02-04 08:45:24.414180 | orchestrator | skipping: [testbed-manager] 2025-02-04 08:45:24.454123 | orchestrator | 2025-02-04 08:45:24.454168 | orchestrator | TASK [Install required packages] *********************************************** 2025-02-04 08:45:24.454182 | orchestrator | skipping: [testbed-manager] 2025-02-04 08:45:24.481736 | orchestrator | 2025-02-04 08:45:24.481790 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-02-04 08:45:24.481807 | orchestrator | skipping: [testbed-manager] 2025-02-04 08:45:24.521663 | orchestrator | 2025-02-04 08:45:24.521709 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-02-04 08:45:24.521721 | orchestrator | skipping: [testbed-manager] 2025-02-04 08:45:24.590408 | orchestrator | 2025-02-04 08:45:24.590490 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-02-04 08:45:24.590521 | orchestrator | skipping: [testbed-manager] 2025-02-04 08:45:24.623032 | orchestrator | 2025-02-04 08:45:24.623089 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-02-04 08:45:24.623107 | orchestrator | skipping: [testbed-manager] 2025-02-04 08:45:25.495441 | orchestrator | 2025-02-04 08:45:25.495493 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-02-04 08:45:25.495509 | orchestrator | changed: [testbed-manager] 2025-02-04 08:47:43.745267 | orchestrator | 2025-02-04 08:47:43.745365 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-02-04 08:47:43.745416 | orchestrator | changed: [testbed-manager] 2025-02-04 08:48:51.094794 | orchestrator | 2025-02-04 08:48:51.094994 | orchestrator | TASK [Install HWE kernel package on Ubuntu] 
************************************ 2025-02-04 08:48:51.095029 | orchestrator | changed: [testbed-manager] 2025-02-04 08:49:13.333387 | orchestrator | 2025-02-04 08:49:13.333447 | orchestrator | TASK [Install required packages] *********************************************** 2025-02-04 08:49:13.333462 | orchestrator | changed: [testbed-manager] 2025-02-04 08:49:22.376284 | orchestrator | 2025-02-04 08:49:22.376415 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-02-04 08:49:22.376463 | orchestrator | changed: [testbed-manager] 2025-02-04 08:49:22.426737 | orchestrator | 2025-02-04 08:49:22.426813 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-02-04 08:49:22.426851 | orchestrator | ok: [testbed-manager] 2025-02-04 08:49:23.277394 | orchestrator | 2025-02-04 08:49:23.277478 | orchestrator | TASK [Get current user] ******************************************************** 2025-02-04 08:49:23.277502 | orchestrator | ok: [testbed-manager] 2025-02-04 08:49:24.062802 | orchestrator | 2025-02-04 08:49:24.062908 | orchestrator | TASK [Create venv directory] *************************************************** 2025-02-04 08:49:24.062954 | orchestrator | changed: [testbed-manager] 2025-02-04 08:49:31.128049 | orchestrator | 2025-02-04 08:49:31.128110 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-02-04 08:49:31.128127 | orchestrator | changed: [testbed-manager] 2025-02-04 08:49:37.970672 | orchestrator | 2025-02-04 08:49:37.970837 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-02-04 08:49:37.970960 | orchestrator | changed: [testbed-manager] 2025-02-04 08:49:40.904490 | orchestrator | 2025-02-04 08:49:40.904543 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-02-04 08:49:40.904562 | orchestrator | changed: [testbed-manager] 2025-02-04 08:49:42.920107 | orchestrator | 2025-02-04 08:49:42.920155 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-02-04 08:49:42.920225 | orchestrator | changed: [testbed-manager] 2025-02-04 08:49:44.138914 | orchestrator | 2025-02-04 08:49:44.138958 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-02-04 08:49:44.138971 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-02-04 08:49:44.178483 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-02-04 08:49:44.178603 | orchestrator | 2025-02-04 08:49:44.178627 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-02-04 08:49:44.178640 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-02-04 08:50:11.636309 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-02-04 08:50:11.636440 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-02-04 08:50:11.636474 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-02-04 08:50:11.636517 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-02-04 08:50:12.229441 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-02-04 08:50:12.229529 | orchestrator | 2025-02-04 08:50:12.229549 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-02-04 08:50:12.229579 | orchestrator | changed: [testbed-manager] 2025-02-04 08:50:33.443152 | orchestrator | 2025-02-04 08:50:33.443293 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-02-04 08:50:33.443336 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-02-04 08:50:35.800979 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-02-04 08:50:35.801069 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-02-04 08:50:35.801086 | orchestrator | 2025-02-04 08:50:35.801102 | orchestrator | TASK [Install local collections] *********************************************** 2025-02-04 08:50:35.801130 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-02-04 08:50:37.240969 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-02-04 08:50:37.241073 | orchestrator | 2025-02-04 08:50:37.241090 | orchestrator | PLAY [Create operator user] **************************************************** 2025-02-04 08:50:37.241103 | orchestrator | 2025-02-04 08:50:37.241115 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-04 08:50:37.241141 | orchestrator | ok: [testbed-manager] 2025-02-04 08:50:37.286629 | orchestrator | 2025-02-04 08:50:37.286699 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-02-04 08:50:37.286718 | orchestrator | ok: [testbed-manager] 2025-02-04 08:50:37.349826 | orchestrator | 2025-02-04 08:50:37.349890 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-02-04 08:50:37.349907 | orchestrator | ok: [testbed-manager] 2025-02-04 08:50:38.142965 | orchestrator | 2025-02-04 08:50:38.143054 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-02-04 08:50:38.143080 | orchestrator | changed: [testbed-manager] 2025-02-04 08:50:38.929015 | orchestrator | 2025-02-04 08:50:38.929117 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-02-04 08:50:38.929150 | orchestrator | changed: [testbed-manager] 2025-02-04 08:50:40.344871 | orchestrator | 2025-02-04 08:50:40.344964 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-02-04 08:50:40.344996 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-02-04 08:50:41.716303 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-02-04 08:50:41.716342 | orchestrator | 2025-02-04 08:50:41.716348 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-02-04 08:50:41.716361 | orchestrator | changed: [testbed-manager] 2025-02-04 08:50:43.499410 | orchestrator | 2025-02-04 08:50:43.499459 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-02-04 08:50:43.499475 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-02-04 
08:50:44.057610 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-02-04 08:50:44.057657 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-02-04 08:50:44.057667 | orchestrator | 2025-02-04 08:50:44.057676 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-02-04 08:50:44.057692 | orchestrator | changed: [testbed-manager] 2025-02-04 08:50:44.140126 | orchestrator | 2025-02-04 08:50:44.140181 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-02-04 08:50:44.140199 | orchestrator | skipping: [testbed-manager] 2025-02-04 08:50:45.040138 | orchestrator | 2025-02-04 08:50:45.040188 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-02-04 08:50:45.040205 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-04 08:50:45.079748 | orchestrator | changed: [testbed-manager] 2025-02-04 08:50:45.079794 | orchestrator | 2025-02-04 08:50:45.079803 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-02-04 08:50:45.079819 | orchestrator | skipping: [testbed-manager] 2025-02-04 08:50:45.124303 | orchestrator | 2025-02-04 08:50:45.124361 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-02-04 08:50:45.124382 | orchestrator | skipping: [testbed-manager] 2025-02-04 08:50:45.159963 | orchestrator | 2025-02-04 08:50:45.160017 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-02-04 08:50:45.160034 | orchestrator | skipping: [testbed-manager] 2025-02-04 08:50:45.220266 | orchestrator | 2025-02-04 08:50:45.220314 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-02-04 08:50:45.220330 | orchestrator | skipping: [testbed-manager] 2025-02-04 08:50:45.970073 | orchestrator | 2025-02-04 08:50:45.970128 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-02-04 08:50:45.970149 | orchestrator | ok: [testbed-manager] 2025-02-04 08:50:47.395199 | orchestrator | 2025-02-04 08:50:47.395368 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-02-04 08:50:47.395388 | orchestrator | 2025-02-04 08:50:47.395403 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-04 08:50:47.395431 | orchestrator | ok: [testbed-manager] 2025-02-04 08:50:48.376664 | orchestrator | 2025-02-04 08:50:48.376708 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-02-04 08:50:48.376722 | orchestrator | changed: [testbed-manager] 2025-02-04 08:50:48.487148 | orchestrator | 2025-02-04 08:50:48.487219 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 08:50:48.487274 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-02-04 08:50:48.487403 | orchestrator | 2025-02-04 08:50:48.591416 | orchestrator | changed 2025-02-04 08:50:48.609742 | 2025-02-04 08:50:48.609872 | TASK [Point out that the log in on the manager is now possible] 2025-02-04 08:50:48.659340 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 
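Condensed from the part-0 tasks above, bootstrapping the manager's Ansible environment boils down to a venv plus a handful of installs. The paths and version pins are the ones visible in the log; the exact commands and flags are assumptions:

  python3 -m venv /opt/venv
  /opt/venv/bin/pip install netaddr ansible-core 'requests>=2.32.2' 'docker>=7.1.0'
  # "Install collections from Ansible galaxy"
  /opt/venv/bin/ansible-galaxy collection install ansible.netcommon ansible.posix 'community.docker:>=3.10.2'
  # "Install local collections" from the sources synced to /opt/src
  /opt/venv/bin/ansible-galaxy collection install /opt/src/osism/ansible-collection-commons /opt/src/osism/ansible-collection-services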
2025-02-04 08:50:48.670152 | 2025-02-04 08:50:48.670290 | TASK [Point out that the following task takes some time and does not give any output] 2025-02-04 08:50:48.715809 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-02-04 08:50:48.726370 | 2025-02-04 08:50:48.726495 | TASK [Run manager part 1 + 2] 2025-02-04 08:50:49.647345 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-02-04 08:50:49.709651 | orchestrator | 2025-02-04 08:50:52.781582 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-02-04 08:50:52.781828 | orchestrator | 2025-02-04 08:50:52.781883 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-04 08:50:52.781924 | orchestrator | ok: [testbed-manager] 2025-02-04 08:50:52.821075 | orchestrator | 2025-02-04 08:50:52.821151 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-02-04 08:50:52.821177 | orchestrator | skipping: [testbed-manager] 2025-02-04 08:50:52.872449 | orchestrator | 2025-02-04 08:50:52.872637 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-02-04 08:50:52.872661 | orchestrator | ok: [testbed-manager] 2025-02-04 08:50:52.918772 | orchestrator | 2025-02-04 08:50:52.918838 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-02-04 08:50:52.918858 | orchestrator | ok: [testbed-manager] 2025-02-04 08:50:52.987190 | orchestrator | 2025-02-04 08:50:52.987308 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-02-04 08:50:52.987330 | orchestrator | ok: [testbed-manager] 2025-02-04 08:50:53.065609 | orchestrator | 2025-02-04 08:50:53.065665 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-02-04 08:50:53.065682 | orchestrator | ok: [testbed-manager] 2025-02-04 08:50:53.114833 | orchestrator | 2025-02-04 08:50:53.114885 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-02-04 08:50:53.114925 | orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-02-04 08:50:53.845040 | orchestrator | 2025-02-04 08:50:53.845254 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-02-04 08:50:53.845294 | orchestrator | ok: [testbed-manager] 2025-02-04 08:50:53.900052 | orchestrator | 2025-02-04 08:50:53.900146 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-02-04 08:50:53.900181 | orchestrator | skipping: [testbed-manager] 2025-02-04 08:50:55.314171 | orchestrator | 2025-02-04 08:50:55.314249 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-02-04 08:50:55.314276 | orchestrator | changed: [testbed-manager] 2025-02-04 08:50:55.880318 | orchestrator | 2025-02-04 08:50:55.880362 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-02-04 08:50:55.880377 | orchestrator | ok: [testbed-manager] 2025-02-04 08:50:57.002089 | orchestrator | 2025-02-04 08:50:57.002138 | orchestrator | TASK
[osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-02-04 08:50:57.002154 | orchestrator | changed: [testbed-manager] 2025-02-04 08:51:10.408022 | orchestrator | 2025-02-04 08:51:10.408075 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-02-04 08:51:10.408093 | orchestrator | changed: [testbed-manager] 2025-02-04 08:51:11.139589 | orchestrator | 2025-02-04 08:51:11.139673 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-02-04 08:51:11.139699 | orchestrator | ok: [testbed-manager] 2025-02-04 08:51:11.197714 | orchestrator | 2025-02-04 08:51:11.197788 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-02-04 08:51:11.197812 | orchestrator | skipping: [testbed-manager] 2025-02-04 08:51:12.224380 | orchestrator | 2025-02-04 08:51:12.224493 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-02-04 08:51:12.224545 | orchestrator | changed: [testbed-manager] 2025-02-04 08:51:13.241751 | orchestrator | 2025-02-04 08:51:13.241871 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-02-04 08:51:13.241907 | orchestrator | changed: [testbed-manager] 2025-02-04 08:51:13.846276 | orchestrator | 2025-02-04 08:51:13.891264 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-02-04 08:51:13.891477 | orchestrator | changed: [testbed-manager] 2025-02-04 08:51:18.300999 | orchestrator | 2025-02-04 08:51:18.301117 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-02-04 08:51:18.301141 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-02-04 08:51:18.301189 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-02-04 08:51:18.301208 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-02-04 08:51:18.301223 | orchestrator | deprecation_warnings=False in ansible.cfg. 
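The "Copy ubuntu.sources file" task above swaps the classic sources.list for a deb822-style file. Its content is not shown in the log; a typical Ubuntu 24.04 file has roughly this shape (mirror URIs and suites are assumptions):

  cat > /etc/apt/sources.list.d/ubuntu.sources <<'EOF'
  Types: deb
  URIs: http://archive.ubuntu.com/ubuntu
  Suites: noble noble-updates noble-security
  Components: main restricted universe multiverse
  Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
  EOF
  apt-get update   # corresponds to "Update package cache" above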
2025-02-04 08:51:18.301275 | orchestrator | changed: [testbed-manager] 2025-02-04 08:51:27.684284 | orchestrator | 2025-02-04 08:51:27.684378 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-02-04 08:51:27.684407 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-02-04 08:51:28.746155 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-02-04 08:51:28.746293 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-02-04 08:51:28.746314 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-02-04 08:51:28.746328 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-02-04 08:51:28.746341 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-02-04 08:51:28.746353 | orchestrator | 2025-02-04 08:51:28.746366 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-02-04 08:51:28.746407 | orchestrator | changed: [testbed-manager] 2025-02-04 08:51:28.786350 | orchestrator | 2025-02-04 08:51:28.786399 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-02-04 08:51:28.786414 | orchestrator | skipping: [testbed-manager] 2025-02-04 08:51:32.052080 | orchestrator | 2025-02-04 08:51:32.052159 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-02-04 08:51:32.052192 | orchestrator | changed: [testbed-manager] 2025-02-04 08:51:32.089872 | orchestrator | 2025-02-04 08:51:32.089945 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-02-04 08:51:32.089972 | orchestrator | skipping: [testbed-manager] 2025-02-04 08:53:15.912315 | orchestrator | 2025-02-04 08:53:15.912389 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-02-04 08:53:15.912404 | orchestrator | changed: [testbed-manager] 2025-02-04 08:53:17.113996 | orchestrator | 2025-02-04 08:53:17.114136 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-02-04 08:53:17.114188 | orchestrator | ok: [testbed-manager] 2025-02-04 08:53:17.218742 | orchestrator | 2025-02-04 08:53:17.218885 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 08:53:17.218920 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-02-04 08:53:17.218941 | orchestrator | 2025-02-04 08:53:17.372720 | orchestrator | changed 2025-02-04 08:53:17.380482 | 2025-02-04 08:53:17.380560 | TASK [Reboot manager] 2025-02-04 08:53:18.912027 | orchestrator | changed 2025-02-04 08:53:18.929648 | 2025-02-04 08:53:18.929778 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-02-04 08:53:33.678920 | orchestrator | ok 2025-02-04 08:53:33.695918 | 2025-02-04 08:53:33.696069 | TASK [Wait a little longer for the manager so that everything is ready] 2025-02-04 08:54:33.760104 | orchestrator | ok 2025-02-04 08:54:33.773054 | 2025-02-04 08:54:33.773190 | TASK [Deploy manager + bootstrap nodes] 2025-02-04 08:54:36.223277 | orchestrator | 2025-02-04 08:54:36.227138 | orchestrator | # DEPLOY MANAGER 2025-02-04 08:54:36.227160 | orchestrator | 2025-02-04 08:54:36.227167 | orchestrator | + set -e 2025-02-04 08:54:36.227189 | orchestrator | + echo 2025-02-04 08:54:36.227196 | orchestrator | + echo '# DEPLOY MANAGER' 2025-02-04 08:54:36.227203 | 
orchestrator | + echo 2025-02-04 08:54:36.227212 | orchestrator | + cat /opt/manager-vars.sh 2025-02-04 08:54:36.227227 | orchestrator | export NUMBER_OF_NODES=6 2025-02-04 08:54:36.227508 | orchestrator | 2025-02-04 08:54:36.227515 | orchestrator | export CEPH_VERSION=quincy 2025-02-04 08:54:36.227521 | orchestrator | export CONFIGURATION_VERSION=main 2025-02-04 08:54:36.227526 | orchestrator | export MANAGER_VERSION=latest 2025-02-04 08:54:36.227531 | orchestrator | export OPENSTACK_VERSION=2024.1 2025-02-04 08:54:36.227536 | orchestrator | 2025-02-04 08:54:36.227541 | orchestrator | export ARA=false 2025-02-04 08:54:36.227546 | orchestrator | export TEMPEST=false 2025-02-04 08:54:36.227551 | orchestrator | export IS_ZUUL=true 2025-02-04 08:54:36.227556 | orchestrator | 2025-02-04 08:54:36.227561 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.89 2025-02-04 08:54:36.227567 | orchestrator | export EXTERNAL_API=false 2025-02-04 08:54:36.227572 | orchestrator | 2025-02-04 08:54:36.227577 | orchestrator | export IMAGE_USER=ubuntu 2025-02-04 08:54:36.227582 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-02-04 08:54:36.227588 | orchestrator | 2025-02-04 08:54:36.227593 | orchestrator | export CEPH_STACK=ceph-ansible 2025-02-04 08:54:36.227600 | orchestrator | 2025-02-04 08:54:36.228429 | orchestrator | + echo 2025-02-04 08:54:36.228436 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-02-04 08:54:36.228444 | orchestrator | ++ export INTERACTIVE=false 2025-02-04 08:54:36.228585 | orchestrator | ++ INTERACTIVE=false 2025-02-04 08:54:36.228592 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-02-04 08:54:36.228601 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-02-04 08:54:36.228609 | orchestrator | + source /opt/manager-vars.sh 2025-02-04 08:54:36.228703 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-02-04 08:54:36.228710 | orchestrator | ++ NUMBER_OF_NODES=6 2025-02-04 08:54:36.228715 | orchestrator | ++ export CEPH_VERSION=quincy 2025-02-04 08:54:36.228720 | orchestrator | ++ CEPH_VERSION=quincy 2025-02-04 08:54:36.228725 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-02-04 08:54:36.228730 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-02-04 08:54:36.228739 | orchestrator | ++ export MANAGER_VERSION=latest 2025-02-04 08:54:36.228744 | orchestrator | ++ MANAGER_VERSION=latest 2025-02-04 08:54:36.228748 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-02-04 08:54:36.228755 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-02-04 08:54:36.228851 | orchestrator | ++ export ARA=false 2025-02-04 08:54:36.228858 | orchestrator | ++ ARA=false 2025-02-04 08:54:36.228863 | orchestrator | ++ export TEMPEST=false 2025-02-04 08:54:36.228868 | orchestrator | ++ TEMPEST=false 2025-02-04 08:54:36.228873 | orchestrator | ++ export IS_ZUUL=true 2025-02-04 08:54:36.228878 | orchestrator | ++ IS_ZUUL=true 2025-02-04 08:54:36.228883 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.89 2025-02-04 08:54:36.228888 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.89 2025-02-04 08:54:36.228897 | orchestrator | ++ export EXTERNAL_API=false 2025-02-04 08:54:36.228902 | orchestrator | ++ EXTERNAL_API=false 2025-02-04 08:54:36.228907 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-02-04 08:54:36.228912 | orchestrator | ++ IMAGE_USER=ubuntu 2025-02-04 08:54:36.228919 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-02-04 08:54:36.278913 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-02-04 08:54:36.279032 | orchestrator | ++ export 
CEPH_STACK=ceph-ansible 2025-02-04 08:54:36.279052 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-02-04 08:54:36.279068 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-02-04 08:54:36.279110 | orchestrator | + docker version 2025-02-04 08:54:36.552079 | orchestrator | Client: Docker Engine - Community 2025-02-04 08:54:36.556987 | orchestrator | Version: 27.4.1 2025-02-04 08:54:36.557054 | orchestrator | API version: 1.47 2025-02-04 08:54:36.557071 | orchestrator | Go version: go1.22.10 2025-02-04 08:54:36.557085 | orchestrator | Git commit: b9d17ea 2025-02-04 08:54:36.557100 | orchestrator | Built: Tue Dec 17 15:45:46 2024 2025-02-04 08:54:36.557116 | orchestrator | OS/Arch: linux/amd64 2025-02-04 08:54:36.557130 | orchestrator | Context: default 2025-02-04 08:54:36.557144 | orchestrator | 2025-02-04 08:54:36.557159 | orchestrator | Server: Docker Engine - Community 2025-02-04 08:54:36.557173 | orchestrator | Engine: 2025-02-04 08:54:36.557188 | orchestrator | Version: 27.4.1 2025-02-04 08:54:36.557202 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-02-04 08:54:36.557216 | orchestrator | Go version: go1.22.10 2025-02-04 08:54:36.557231 | orchestrator | Git commit: c710b88 2025-02-04 08:54:36.557276 | orchestrator | Built: Tue Dec 17 15:45:46 2024 2025-02-04 08:54:36.557292 | orchestrator | OS/Arch: linux/amd64 2025-02-04 08:54:36.557305 | orchestrator | Experimental: false 2025-02-04 08:54:36.557319 | orchestrator | containerd: 2025-02-04 08:54:36.557333 | orchestrator | Version: 1.7.25 2025-02-04 08:54:36.557347 | orchestrator | GitCommit: bcc810d6b9066471b0b6fa75f557a15a1cbf31bb 2025-02-04 08:54:36.557362 | orchestrator | runc: 2025-02-04 08:54:36.557376 | orchestrator | Version: 1.2.4 2025-02-04 08:54:36.557391 | orchestrator | GitCommit: v1.2.4-0-g6c52b3f 2025-02-04 08:54:36.557434 | orchestrator | docker-init: 2025-02-04 08:54:36.557449 | orchestrator | Version: 0.19.0 2025-02-04 08:54:36.557463 | orchestrator | GitCommit: de40ad0 2025-02-04 08:54:36.557487 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-02-04 08:54:36.564669 | orchestrator | + set -e 2025-02-04 08:54:36.571064 | orchestrator | + source /opt/manager-vars.sh 2025-02-04 08:54:36.571133 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-02-04 08:54:36.571149 | orchestrator | ++ NUMBER_OF_NODES=6 2025-02-04 08:54:36.571162 | orchestrator | ++ export CEPH_VERSION=quincy 2025-02-04 08:54:36.571175 | orchestrator | ++ CEPH_VERSION=quincy 2025-02-04 08:54:36.571189 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-02-04 08:54:36.571203 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-02-04 08:54:36.571215 | orchestrator | ++ export MANAGER_VERSION=latest 2025-02-04 08:54:36.571228 | orchestrator | ++ MANAGER_VERSION=latest 2025-02-04 08:54:36.571240 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-02-04 08:54:36.571253 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-02-04 08:54:36.571266 | orchestrator | ++ export ARA=false 2025-02-04 08:54:36.571278 | orchestrator | ++ ARA=false 2025-02-04 08:54:36.571291 | orchestrator | ++ export TEMPEST=false 2025-02-04 08:54:36.571303 | orchestrator | ++ TEMPEST=false 2025-02-04 08:54:36.571316 | orchestrator | ++ export IS_ZUUL=true 2025-02-04 08:54:36.571328 | orchestrator | ++ IS_ZUUL=true 2025-02-04 08:54:36.571340 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.89 2025-02-04 08:54:36.571353 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.89 
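The xtrace lines that follow show /opt/configuration/scripts/set-ceph-version.sh (and its set-openstack-version.sh twin) at work. Reconstructed from those + lines, each helper is a guarded grep-and-sed edit of the manager configuration; the real script may differ in detail:

  #!/usr/bin/env bash
  # Reconstruction from the trace below, not the script's verbatim source.
  set -e
  VERSION=$1
  CONFIG=/opt/configuration/environments/manager/configuration.yml
  # Only rewrite the value if the key is already present.
  if [[ -n $(grep '^ceph_version:' "$CONFIG") ]]; then
    sed -i "s/ceph_version: .*/ceph_version: $VERSION/g" "$CONFIG"
  fi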
2025-02-04 08:54:36.571366 | orchestrator | ++ export EXTERNAL_API=false 2025-02-04 08:54:36.571439 | orchestrator | ++ EXTERNAL_API=false 2025-02-04 08:54:36.571461 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-02-04 08:54:36.571481 | orchestrator | ++ IMAGE_USER=ubuntu 2025-02-04 08:54:36.571507 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-02-04 08:54:36.571528 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-02-04 08:54:36.571549 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-02-04 08:54:36.571571 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-02-04 08:54:36.571592 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-02-04 08:54:36.571614 | orchestrator | ++ export INTERACTIVE=false 2025-02-04 08:54:36.571634 | orchestrator | ++ INTERACTIVE=false 2025-02-04 08:54:36.571647 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-02-04 08:54:36.571659 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-02-04 08:54:36.571672 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-02-04 08:54:36.571687 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-02-04 08:54:36.571700 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh quincy 2025-02-04 08:54:36.571725 | orchestrator | + set -e 2025-02-04 08:54:36.572525 | orchestrator | + VERSION=quincy 2025-02-04 08:54:36.572580 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-02-04 08:54:36.578445 | orchestrator | + [[ -n ceph_version: quincy ]] 2025-02-04 08:54:36.584626 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: quincy/g' /opt/configuration/environments/manager/configuration.yml 2025-02-04 08:54:36.584728 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.1 2025-02-04 08:54:36.591600 | orchestrator | + set -e 2025-02-04 08:54:36.594261 | orchestrator | + VERSION=2024.1 2025-02-04 08:54:36.594324 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-02-04 08:54:36.594366 | orchestrator | + [[ -n openstack_version: 2024.1 ]] 2025-02-04 08:54:36.596809 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.1/g' /opt/configuration/environments/manager/configuration.yml 2025-02-04 08:54:36.596856 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-02-04 08:54:36.597514 | orchestrator | ++ semver latest 7.0.0 2025-02-04 08:54:36.655838 | orchestrator | + [[ -1 -ge 0 ]] 2025-02-04 08:54:36.695805 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-02-04 08:54:36.695899 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-02-04 08:54:36.695918 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-02-04 08:54:36.695974 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-02-04 08:54:36.698247 | orchestrator | + source /opt/venv/bin/activate 2025-02-04 08:54:36.699331 | orchestrator | ++ deactivate nondestructive 2025-02-04 08:54:36.699376 | orchestrator | ++ '[' -n '' ']' 2025-02-04 08:54:36.699433 | orchestrator | ++ '[' -n '' ']' 2025-02-04 08:54:36.699467 | orchestrator | ++ hash -r 2025-02-04 08:54:36.699499 | orchestrator | ++ '[' -n '' ']' 2025-02-04 08:54:36.699650 | orchestrator | ++ unset VIRTUAL_ENV 2025-02-04 08:54:36.699684 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-02-04 08:54:36.699709 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-02-04 08:54:36.699742 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-02-04 08:54:36.699784 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-02-04 08:54:36.699809 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-02-04 08:54:36.699834 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-02-04 08:54:36.699861 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-02-04 08:54:36.699885 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-02-04 08:54:36.699910 | orchestrator | ++ export PATH 2025-02-04 08:54:36.699940 | orchestrator | ++ '[' -n '' ']' 2025-02-04 08:54:36.699966 | orchestrator | ++ '[' -z '' ']' 2025-02-04 08:54:36.699989 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-02-04 08:54:36.700014 | orchestrator | ++ PS1='(venv) ' 2025-02-04 08:54:36.700039 | orchestrator | ++ export PS1 2025-02-04 08:54:36.700062 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-02-04 08:54:36.700077 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-02-04 08:54:36.700091 | orchestrator | ++ hash -r 2025-02-04 08:54:36.700109 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-02-04 08:54:38.037639 | orchestrator | 2025-02-04 08:54:38.618424 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-02-04 08:54:38.618541 | orchestrator | 2025-02-04 08:54:38.618561 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-02-04 08:54:38.618590 | orchestrator | ok: [testbed-manager] 2025-02-04 08:54:39.640343 | orchestrator | 2025-02-04 08:54:39.640531 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-02-04 08:54:39.640589 | orchestrator | changed: [testbed-manager] 2025-02-04 08:54:42.041959 | orchestrator | 2025-02-04 08:54:42.042174 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-02-04 08:54:42.042203 | orchestrator | 2025-02-04 08:54:42.042219 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-04 08:54:42.042252 | orchestrator | ok: [testbed-manager] 2025-02-04 08:54:47.535746 | orchestrator | 2025-02-04 08:54:47.535861 | orchestrator | TASK [Pull images] ************************************************************* 2025-02-04 08:54:47.535900 | orchestrator | changed: [testbed-manager] => (item=quay.io/osism/ara-server:1.7.2) 2025-02-04 08:56:03.879034 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/mariadb:11.6.2) 2025-02-04 08:56:03.879156 | orchestrator | changed: [testbed-manager] => (item=quay.io/osism/ceph-ansible:quincy) 2025-02-04 08:56:03.879177 | orchestrator | changed: [testbed-manager] => (item=quay.io/osism/inventory-reconciler:latest) 2025-02-04 08:56:03.879193 | orchestrator | changed: [testbed-manager] => (item=quay.io/osism/kolla-ansible:2024.1) 2025-02-04 08:56:03.879208 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/redis:7.4.2-alpine) 2025-02-04 08:56:03.879224 | orchestrator | changed: [testbed-manager] => (item=quay.io/osism/netbox:v4.1.10) 2025-02-04 08:56:03.879239 | orchestrator | changed: [testbed-manager] => 
(item=quay.io/osism/osism-ansible:latest) 2025-02-04 08:56:03.879254 | orchestrator | changed: [testbed-manager] => (item=quay.io/osism/osism:latest) 2025-02-04 08:56:03.879269 | orchestrator | changed: [testbed-manager] => (item=quay.io/osism/osism-netbox:latest) 2025-02-04 08:56:03.879284 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/postgres:16.6-alpine) 2025-02-04 08:56:03.879299 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/traefik:v3.3.3) 2025-02-04 08:56:03.879314 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/hashicorp/vault:1.18.4) 2025-02-04 08:56:03.879328 | orchestrator | 2025-02-04 08:56:03.879344 | orchestrator | TASK [Check status] ************************************************************ 2025-02-04 08:56:03.879428 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-02-04 08:56:03.879448 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 2025-02-04 08:56:03.879522 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (118 retries left). 2025-02-04 08:56:03.879543 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j400883133845.1532', 'results_file': '/home/dragon/.ansible_async/j400883133845.1532', 'changed': True, 'item': 'quay.io/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'}) 2025-02-04 08:56:03.879578 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j820627817030.1557', 'results_file': '/home/dragon/.ansible_async/j820627817030.1557', 'changed': True, 'item': 'index.docker.io/library/mariadb:11.6.2', 'ansible_loop_var': 'item'}) 2025-02-04 08:56:03.879600 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-02-04 08:56:03.879616 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 2025-02-04 08:56:03.879633 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j604424319543.1582', 'results_file': '/home/dragon/.ansible_async/j604424319543.1582', 'changed': True, 'item': 'quay.io/osism/ceph-ansible:quincy', 'ansible_loop_var': 'item'}) 2025-02-04 08:56:03.879651 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j561371526184.1614', 'results_file': '/home/dragon/.ansible_async/j561371526184.1614', 'changed': True, 'item': 'quay.io/osism/inventory-reconciler:latest', 'ansible_loop_var': 'item'}) 2025-02-04 08:56:03.879668 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 
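The FAILED - RETRYING lines here are expected noise rather than errors: "Pull images" starts every docker pull as an asynchronous Ansible job, and "Check status" polls the async results (up to 120 retries each) until all pulls have finished. A rough shell analogue of that fire-and-poll pattern:

  # Sketch of the async/poll idea; image list shortened from the log.
  images=(quay.io/osism/osism:latest index.docker.io/library/redis:7.4.2-alpine)
  for image in "${images[@]}"; do
    docker pull "$image" &   # "Pull images": fire and forget
  done
  wait                       # "Check status": keep retrying until every job finishes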
2025-02-04 08:56:03.879685 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j198553248142.1647', 'results_file': '/home/dragon/.ansible_async/j198553248142.1647', 'changed': True, 'item': 'quay.io/osism/kolla-ansible:2024.1', 'ansible_loop_var': 'item'}) 2025-02-04 08:56:03.879702 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j506617614285.1679', 'results_file': '/home/dragon/.ansible_async/j506617614285.1679', 'changed': True, 'item': 'index.docker.io/library/redis:7.4.2-alpine', 'ansible_loop_var': 'item'}) 2025-02-04 08:56:03.879722 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j843102895677.1711', 'results_file': '/home/dragon/.ansible_async/j843102895677.1711', 'changed': True, 'item': 'quay.io/osism/netbox:v4.1.10', 'ansible_loop_var': 'item'}) 2025-02-04 08:56:03.879740 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-02-04 08:56:03.879757 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j294572576738.1744', 'results_file': '/home/dragon/.ansible_async/j294572576738.1744', 'changed': True, 'item': 'quay.io/osism/osism-ansible:latest', 'ansible_loop_var': 'item'}) 2025-02-04 08:56:03.879775 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j785480633097.1776', 'results_file': '/home/dragon/.ansible_async/j785480633097.1776', 'changed': True, 'item': 'quay.io/osism/osism:latest', 'ansible_loop_var': 'item'}) 2025-02-04 08:56:03.879792 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j911674374566.1808', 'results_file': '/home/dragon/.ansible_async/j911674374566.1808', 'changed': True, 'item': 'quay.io/osism/osism-netbox:latest', 'ansible_loop_var': 'item'}) 2025-02-04 08:56:03.879809 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j577131530163.1840', 'results_file': '/home/dragon/.ansible_async/j577131530163.1840', 'changed': True, 'item': 'index.docker.io/library/postgres:16.6-alpine', 'ansible_loop_var': 'item'}) 2025-02-04 08:56:03.879838 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j545764880323.1884', 'results_file': '/home/dragon/.ansible_async/j545764880323.1884', 'changed': True, 'item': 'index.docker.io/library/traefik:v3.3.3', 'ansible_loop_var': 'item'}) 2025-02-04 08:56:03.879865 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j532600102542.1916', 'results_file': '/home/dragon/.ansible_async/j532600102542.1916', 'changed': True, 'item': 'index.docker.io/hashicorp/vault:1.18.4', 'ansible_loop_var': 'item'}) 2025-02-04 08:56:03.939975 | orchestrator | 2025-02-04 08:56:03.940073 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-02-04 08:56:03.940106 | orchestrator | ok: [testbed-manager] 2025-02-04 08:56:04.469896 | orchestrator | 2025-02-04 08:56:04.470012 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-02-04 08:56:04.470110 | orchestrator | changed: [testbed-manager] 2025-02-04 08:56:04.812306 | orchestrator | 2025-02-04 
08:56:04.812422 | orchestrator | TASK [Add netbox_postgres_volume_type parameter] ******************************* 2025-02-04 08:56:04.812497 | orchestrator | changed: [testbed-manager] 2025-02-04 08:56:05.167131 | orchestrator | 2025-02-04 08:56:05.167246 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-02-04 08:56:05.167282 | orchestrator | changed: [testbed-manager] 2025-02-04 08:56:05.216307 | orchestrator | 2025-02-04 08:56:05.216395 | orchestrator | TASK [Do not use Nexus for Ceph on CentOS] ************************************* 2025-02-04 08:56:05.216414 | orchestrator | skipping: [testbed-manager] 2025-02-04 08:56:05.273850 | orchestrator | 2025-02-04 08:56:05.273990 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-02-04 08:56:05.274099 | orchestrator | skipping: [testbed-manager] 2025-02-04 08:56:05.621816 | orchestrator | 2025-02-04 08:56:05.621945 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-02-04 08:56:05.621984 | orchestrator | ok: [testbed-manager] 2025-02-04 08:56:05.783789 | orchestrator | 2025-02-04 08:56:05.783891 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-02-04 08:56:05.783920 | orchestrator | skipping: [testbed-manager] 2025-02-04 08:56:07.684336 | orchestrator | 2025-02-04 08:56:07.684486 | orchestrator | PLAY [Apply role traefik & netbox] ********************************************* 2025-02-04 08:56:07.684520 | orchestrator | 2025-02-04 08:56:07.684537 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-04 08:56:07.684588 | orchestrator | ok: [testbed-manager] 2025-02-04 08:56:07.900220 | orchestrator | 2025-02-04 08:56:07.900358 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-02-04 08:56:07.900410 | orchestrator | 2025-02-04 08:56:08.002211 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-02-04 08:56:08.002336 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-02-04 08:56:09.205381 | orchestrator | 2025-02-04 08:56:09.205515 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-02-04 08:56:09.205589 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-02-04 08:56:11.037894 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-02-04 08:56:11.038167 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-02-04 08:56:11.038190 | orchestrator | 2025-02-04 08:56:11.038207 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-02-04 08:56:11.038246 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-02-04 08:56:11.685817 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-02-04 08:56:11.685897 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-02-04 08:56:11.685908 | orchestrator | 2025-02-04 08:56:11.685916 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-02-04 08:56:11.685938 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-04 08:56:12.373169 | orchestrator | changed: [testbed-manager] 
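
The "Pull images" / "Check status" pair earlier in this play is Ansible's async pattern: each pull is dispatched as a background job (note the ansible_job_id and results_file fields in the loop output), and the status task then polls the registered jobs, so the FAILED - RETRYING lines are the poll loop counting down its retry budget, not actual pull failures. A minimal bash analogue of that bounded-retry polling, assuming a hypothetical wait_for_image helper and a five-second poll interval:

# Hedged sketch: wait until an image is present locally, giving up after a
# fixed retry budget (mirrors the 120-retry countdown in the task output).
wait_for_image() {
    local image=$1 max_attempts=${2:-120} attempt=1
    until docker image inspect "$image" >/dev/null 2>&1; do
        if [ "$attempt" -ge "$max_attempts" ]; then
            echo "$image still missing after $max_attempts attempts" >&2
            return 1
        fi
        attempt=$((attempt + 1))
        sleep 5
    done
}

wait_for_image quay.io/osism/ara-server:1.7.2
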
2025-02-04 08:56:12.373274 | orchestrator | 2025-02-04 08:56:12.373327 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-02-04 08:56:12.373357 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-04 08:56:12.460325 | orchestrator | changed: [testbed-manager] 2025-02-04 08:56:12.460418 | orchestrator | 2025-02-04 08:56:12.460435 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-02-04 08:56:12.460490 | orchestrator | skipping: [testbed-manager] 2025-02-04 08:56:12.854200 | orchestrator | 2025-02-04 08:56:12.854297 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-02-04 08:56:12.854319 | orchestrator | ok: [testbed-manager] 2025-02-04 08:56:12.963052 | orchestrator | 2025-02-04 08:56:12.963162 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-02-04 08:56:12.963196 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-02-04 08:56:14.063499 | orchestrator | 2025-02-04 08:56:14.063593 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-02-04 08:56:14.063619 | orchestrator | changed: [testbed-manager] 2025-02-04 08:56:14.887653 | orchestrator | 2025-02-04 08:56:14.887807 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-02-04 08:56:14.887847 | orchestrator | changed: [testbed-manager] 2025-02-04 08:56:18.092284 | orchestrator | 2025-02-04 08:56:18.092407 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-02-04 08:56:18.092445 | orchestrator | changed: [testbed-manager] 2025-02-04 08:56:18.419566 | orchestrator | 2025-02-04 08:56:18.419653 | orchestrator | TASK [Apply netbox role] ******************************************************* 2025-02-04 08:56:18.419677 | orchestrator | 2025-02-04 08:56:18.524268 | orchestrator | TASK [osism.services.netbox : Include install tasks] *************************** 2025-02-04 08:56:18.524369 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager 2025-02-04 08:56:21.058455 | orchestrator | 2025-02-04 08:56:21.058627 | orchestrator | TASK [osism.services.netbox : Install required packages] *********************** 2025-02-04 08:56:21.058676 | orchestrator | ok: [testbed-manager] 2025-02-04 08:56:21.229887 | orchestrator | 2025-02-04 08:56:21.230004 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-02-04 08:56:21.230114 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager 2025-02-04 08:56:22.465177 | orchestrator | 2025-02-04 08:56:22.465290 | orchestrator | TASK [osism.services.netbox : Create required directories] ********************* 2025-02-04 08:56:22.465326 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox) 2025-02-04 08:56:22.568439 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration) 2025-02-04 08:56:22.568565 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/secrets) 2025-02-04 08:56:22.568583 | orchestrator | 2025-02-04 08:56:22.568599 | orchestrator | TASK [osism.services.netbox : 
Include postgres config tasks] ******************* 2025-02-04 08:56:22.568629 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager 2025-02-04 08:56:23.232791 | orchestrator | 2025-02-04 08:56:23.232941 | orchestrator | TASK [osism.services.netbox : Copy postgres environment files] ***************** 2025-02-04 08:56:23.232978 | orchestrator | changed: [testbed-manager] => (item=postgres) 2025-02-04 08:56:23.898262 | orchestrator | 2025-02-04 08:56:23.898377 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-02-04 08:56:23.898414 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-04 08:56:24.352464 | orchestrator | changed: [testbed-manager] 2025-02-04 08:56:24.352605 | orchestrator | 2025-02-04 08:56:24.352625 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] ***** 2025-02-04 08:56:24.352658 | orchestrator | changed: [testbed-manager] 2025-02-04 08:56:24.717879 | orchestrator | 2025-02-04 08:56:24.717995 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] ******************* 2025-02-04 08:56:24.718092 | orchestrator | ok: [testbed-manager] 2025-02-04 08:56:24.769317 | orchestrator | 2025-02-04 08:56:24.769430 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ****************************** 2025-02-04 08:56:24.769507 | orchestrator | skipping: [testbed-manager] 2025-02-04 08:56:25.425938 | orchestrator | 2025-02-04 08:56:25.426115 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] *********** 2025-02-04 08:56:25.426155 | orchestrator | changed: [testbed-manager] 2025-02-04 08:56:25.554844 | orchestrator | 2025-02-04 08:56:25.554961 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-02-04 08:56:25.554996 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager 2025-02-04 08:56:26.342339 | orchestrator | 2025-02-04 08:56:26.342519 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] *********** 2025-02-04 08:56:26.342572 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/initializers) 2025-02-04 08:56:27.025725 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts) 2025-02-04 08:56:27.025870 | orchestrator | 2025-02-04 08:56:27.025898 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] ******************* 2025-02-04 08:56:27.025940 | orchestrator | changed: [testbed-manager] => (item=netbox) 2025-02-04 08:56:27.721965 | orchestrator | 2025-02-04 08:56:27.722128 | orchestrator | TASK [osism.services.netbox : Copy netbox configuration file] ****************** 2025-02-04 08:56:27.722170 | orchestrator | changed: [testbed-manager] 2025-02-04 08:56:27.785890 | orchestrator | 2025-02-04 08:56:27.785981 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] **** 2025-02-04 08:56:27.786007 | orchestrator | skipping: [testbed-manager] 2025-02-04 08:56:28.431720 | orchestrator | 2025-02-04 08:56:28.431826 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (> 1.26)] ***** 2025-02-04 08:56:28.431859 | orchestrator | changed: [testbed-manager] 2025-02-04 08:56:30.398628 | orchestrator | 
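
The traefik tasks above and the netbox service tasks below share one shape: ensure the docker network referenced as external by the compose files exists, render a docker-compose.yml under /opt/<service>, and start the stack. Stripped of the Ansible wrapping, the idempotent core is roughly the following sketch (the network name "traefik" is an assumption taken from the task names):

# Create the shared external network only if it is missing, then bring the
# project up; both commands are safe to re-run.
docker network inspect traefik >/dev/null 2>&1 || docker network create traefik
docker compose --project-directory /opt/netbox up -d
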
2025-02-04 08:56:30.398742 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-02-04 08:56:30.398792 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-04 08:56:36.419111 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-04 08:56:36.419221 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-04 08:56:36.419237 | orchestrator | changed: [testbed-manager] 2025-02-04 08:56:36.419254 | orchestrator | 2025-02-04 08:56:36.419267 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ****************** 2025-02-04 08:56:36.419294 | orchestrator | changed: [testbed-manager] => (item=custom_fields) 2025-02-04 08:56:37.046753 | orchestrator | changed: [testbed-manager] => (item=device_roles) 2025-02-04 08:56:37.047614 | orchestrator | changed: [testbed-manager] => (item=device_types) 2025-02-04 08:56:37.047661 | orchestrator | changed: [testbed-manager] => (item=groups) 2025-02-04 08:56:37.047685 | orchestrator | changed: [testbed-manager] => (item=manufacturers) 2025-02-04 08:56:37.047708 | orchestrator | changed: [testbed-manager] => (item=object_permissions) 2025-02-04 08:56:37.047730 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles) 2025-02-04 08:56:37.047753 | orchestrator | changed: [testbed-manager] => (item=sites) 2025-02-04 08:56:37.047775 | orchestrator | changed: [testbed-manager] => (item=tags) 2025-02-04 08:56:37.047798 | orchestrator | changed: [testbed-manager] => (item=users) 2025-02-04 08:56:37.047822 | orchestrator | 2025-02-04 08:56:37.047848 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] *************** 2025-02-04 08:56:37.047893 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py) 2025-02-04 08:56:37.212441 | orchestrator | 2025-02-04 08:56:37.212571 | orchestrator | TASK [osism.services.netbox : Include service tasks] *************************** 2025-02-04 08:56:37.212606 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager 2025-02-04 08:56:37.949423 | orchestrator | 2025-02-04 08:56:37.949583 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] ******************* 2025-02-04 08:56:37.949623 | orchestrator | changed: [testbed-manager] 2025-02-04 08:56:38.635900 | orchestrator | 2025-02-04 08:56:38.636007 | orchestrator | TASK [osism.services.netbox : Create traefik external network] ***************** 2025-02-04 08:56:38.636040 | orchestrator | ok: [testbed-manager] 2025-02-04 08:56:39.373961 | orchestrator | 2025-02-04 08:56:39.374127 | orchestrator | TASK [osism.services.netbox : Copy docker-compose.yml file] ******************** 2025-02-04 08:56:39.374165 | orchestrator | changed: [testbed-manager] 2025-02-04 08:56:41.655889 | orchestrator | 2025-02-04 08:56:41.655989 | orchestrator | TASK [osism.services.netbox : Pull container images] *************************** 2025-02-04 08:56:41.656013 | orchestrator | ok: [testbed-manager] 2025-02-04 08:56:42.628464 | orchestrator | 2025-02-04 08:56:42.628547 | orchestrator | TASK [osism.services.netbox : Stop and disable old service docker-compose@netbox] *** 2025-02-04 08:56:42.628563 | orchestrator | ok: [testbed-manager] 2025-02-04 08:57:04.842136 | orchestrator | 2025-02-04 08:57:04.842229 | orchestrator | TASK 
[osism.services.netbox : Manage netbox service] *************************** 2025-02-04 08:57:04.842247 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left). 2025-02-04 08:57:04.929864 | orchestrator | ok: [testbed-manager] 2025-02-04 08:57:04.929938 | orchestrator | 2025-02-04 08:57:04.929946 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ******** 2025-02-04 08:57:04.929962 | orchestrator | skipping: [testbed-manager] 2025-02-04 08:57:04.980894 | orchestrator | 2025-02-04 08:57:04.980981 | orchestrator | TASK [osism.services.netbox : Flush handlers] ********************************** 2025-02-04 08:57:04.980991 | orchestrator | 2025-02-04 08:57:04.980999 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-02-04 08:57:04.981019 | orchestrator | skipping: [testbed-manager] 2025-02-04 08:57:05.064266 | orchestrator | 2025-02-04 08:57:05.064360 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-02-04 08:57:05.064383 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager 2025-02-04 08:57:05.856976 | orchestrator | 2025-02-04 08:57:05.857115 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ****** 2025-02-04 08:57:05.857152 | orchestrator | ok: [testbed-manager] 2025-02-04 08:57:05.944840 | orchestrator | 2025-02-04 08:57:05.944937 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] *** 2025-02-04 08:57:05.944962 | orchestrator | ok: [testbed-manager] 2025-02-04 08:57:06.004146 | orchestrator | 2025-02-04 08:57:06.004233 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] *** 2025-02-04 08:57:06.004261 | orchestrator | ok: [testbed-manager] => { 2025-02-04 08:57:06.601031 | orchestrator | "msg": "The major version of the running postgres container is 16" 2025-02-04 08:57:06.601137 | orchestrator | } 2025-02-04 08:57:06.601154 | orchestrator | 2025-02-04 08:57:06.601168 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ****************** 2025-02-04 08:57:06.601195 | orchestrator | ok: [testbed-manager] 2025-02-04 08:57:07.402931 | orchestrator | 2025-02-04 08:57:07.403040 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] ********** 2025-02-04 08:57:07.403074 | orchestrator | ok: [testbed-manager] 2025-02-04 08:57:07.496373 | orchestrator | 2025-02-04 08:57:07.496482 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ****** 2025-02-04 08:57:07.496547 | orchestrator | ok: [testbed-manager] 2025-02-04 08:57:07.561912 | orchestrator | 2025-02-04 08:57:07.562009 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] *** 2025-02-04 08:57:07.562088 | orchestrator | ok: [testbed-manager] => { 2025-02-04 08:57:07.631693 | orchestrator | "msg": "The major version of the postgres image is 16" 2025-02-04 08:57:07.631793 | orchestrator | } 2025-02-04 08:57:07.631821 | orchestrator | 2025-02-04 08:57:07.631848 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop netbox service] ****************** 2025-02-04 08:57:07.631885 | orchestrator | skipping: [testbed-manager] 2025-02-04 08:57:07.695408 | orchestrator | 2025-02-04 08:57:07.695545 | 
orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ****** 2025-02-04 08:57:07.695581 | orchestrator | skipping: [testbed-manager] 2025-02-04 08:57:07.742866 | orchestrator | 2025-02-04 08:57:07.742934 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] ********* 2025-02-04 08:57:07.742976 | orchestrator | skipping: [testbed-manager] 2025-02-04 08:57:07.817407 | orchestrator | 2025-02-04 08:57:07.817608 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************ 2025-02-04 08:57:07.817653 | orchestrator | skipping: [testbed-manager] 2025-02-04 08:57:07.881630 | orchestrator | 2025-02-04 08:57:07.881734 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] *** 2025-02-04 08:57:07.881784 | orchestrator | skipping: [testbed-manager] 2025-02-04 08:57:07.969823 | orchestrator | 2025-02-04 08:57:07.969920 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] ***************** 2025-02-04 08:57:07.969947 | orchestrator | skipping: [testbed-manager] 2025-02-04 08:57:09.382214 | orchestrator | 2025-02-04 08:57:09.382330 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-02-04 08:57:09.382364 | orchestrator | changed: [testbed-manager] 2025-02-04 08:57:09.504784 | orchestrator | 2025-02-04 08:57:09.504888 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] *** 2025-02-04 08:57:09.504921 | orchestrator | ok: [testbed-manager] 2025-02-04 08:58:09.568726 | orchestrator | 2025-02-04 08:58:09.568878 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] ***** 2025-02-04 08:58:09.568918 | orchestrator | Pausing for 60 seconds 2025-02-04 08:58:09.660949 | orchestrator | changed: [testbed-manager] 2025-02-04 08:58:09.661067 | orchestrator | 2025-02-04 08:58:09.661086 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for an healthy netbox service] *** 2025-02-04 08:58:09.661118 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager 2025-02-04 09:01:18.668084 | orchestrator | 2025-02-04 09:01:18.668229 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] *** 2025-02-04 09:01:18.668282 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left). 2025-02-04 09:01:20.599316 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left). 2025-02-04 09:01:20.599439 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left). 2025-02-04 09:01:20.599460 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left). 2025-02-04 09:01:20.599476 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left). 2025-02-04 09:01:20.599491 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left). 2025-02-04 09:01:20.599506 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left). 
2025-02-04 09:01:20.599521 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left). 2025-02-04 09:01:20.599536 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left). 2025-02-04 09:01:20.599551 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left). 2025-02-04 09:01:20.599566 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left). 2025-02-04 09:01:20.599602 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (49 retries left). 2025-02-04 09:01:20.599618 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (48 retries left). 2025-02-04 09:01:20.599632 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (47 retries left). 2025-02-04 09:01:20.599647 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (46 retries left). 2025-02-04 09:01:20.599661 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (45 retries left). 2025-02-04 09:01:20.599676 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (44 retries left). 2025-02-04 09:01:20.599743 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (43 retries left). 2025-02-04 09:01:20.599758 | orchestrator | changed: [testbed-manager] 2025-02-04 09:01:20.599773 | orchestrator | 2025-02-04 09:01:20.599790 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-02-04 09:01:20.599834 | orchestrator | 2025-02-04 09:01:20.599861 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-04 09:01:20.599896 | orchestrator | ok: [testbed-manager] 2025-02-04 09:01:20.701289 | orchestrator | 2025-02-04 09:01:20.701385 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-02-04 09:01:20.701418 | orchestrator | 2025-02-04 09:01:20.768870 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-02-04 09:01:20.768986 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-02-04 09:01:22.605327 | orchestrator | 2025-02-04 09:01:22.605457 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-02-04 09:01:22.605494 | orchestrator | ok: [testbed-manager] 2025-02-04 09:01:22.665466 | orchestrator | 2025-02-04 09:01:22.665580 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-02-04 09:01:22.665615 | orchestrator | ok: [testbed-manager] 2025-02-04 09:01:22.771090 | orchestrator | 2025-02-04 09:01:22.771204 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-02-04 09:01:22.771236 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-02-04 09:01:25.399076 | orchestrator | 2025-02-04 09:01:25.399213 | orchestrator | TASK [osism.services.manager : Create 
required directories] ******************** 2025-02-04 09:01:25.399256 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-02-04 09:01:26.009814 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-02-04 09:01:26.009961 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-02-04 09:01:26.009985 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-02-04 09:01:26.009997 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-02-04 09:01:26.010010 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-02-04 09:01:26.010076 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-02-04 09:01:26.010089 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-02-04 09:01:26.010105 | orchestrator | 2025-02-04 09:01:26.010117 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-02-04 09:01:26.010147 | orchestrator | changed: [testbed-manager] 2025-02-04 09:01:26.102243 | orchestrator | 2025-02-04 09:01:26.102339 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-02-04 09:01:26.102368 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-02-04 09:01:27.225797 | orchestrator | 2025-02-04 09:01:27.225896 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-02-04 09:01:27.225920 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-02-04 09:01:27.884418 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-02-04 09:01:27.884550 | orchestrator | 2025-02-04 09:01:27.884584 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-02-04 09:01:27.884627 | orchestrator | changed: [testbed-manager] 2025-02-04 09:01:27.954288 | orchestrator | 2025-02-04 09:01:27.954410 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-02-04 09:01:27.954449 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:01:28.013288 | orchestrator | 2025-02-04 09:01:28.013393 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-02-04 09:01:28.013426 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-02-04 09:01:29.414315 | orchestrator | 2025-02-04 09:01:29.414476 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-02-04 09:01:29.414530 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-04 09:01:29.989489 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-04 09:01:29.989603 | orchestrator | changed: [testbed-manager] 2025-02-04 09:01:29.989620 | orchestrator | 2025-02-04 09:01:29.989634 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-02-04 09:01:29.989661 | orchestrator | changed: [testbed-manager] 2025-02-04 09:01:30.075152 | orchestrator | 2025-02-04 09:01:30.075265 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-02-04 09:01:30.075298 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager 2025-02-04 09:01:30.627087 | orchestrator | 2025-02-04 09:01:30.627214 | orchestrator | TASK [osism.services.manager : Copy secret files] ****************************** 2025-02-04 09:01:30.627251 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-04 09:01:31.212181 | orchestrator | changed: [testbed-manager] 2025-02-04 09:01:31.212338 | orchestrator | 2025-02-04 09:01:31.212358 | orchestrator | TASK [osism.services.manager : Copy netbox environment file] ******************* 2025-02-04 09:01:31.212395 | orchestrator | changed: [testbed-manager] 2025-02-04 09:01:31.305502 | orchestrator | 2025-02-04 09:01:31.305657 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-02-04 09:01:31.305769 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-02-04 09:01:31.851082 | orchestrator | 2025-02-04 09:01:31.851217 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-02-04 09:01:31.851250 | orchestrator | changed: [testbed-manager] 2025-02-04 09:01:32.240792 | orchestrator | 2025-02-04 09:01:32.240943 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-02-04 09:01:32.240985 | orchestrator | changed: [testbed-manager] 2025-02-04 09:01:33.420940 | orchestrator | 2025-02-04 09:01:33.421098 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-02-04 09:01:33.421140 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-02-04 09:01:34.071456 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-02-04 09:01:34.071636 | orchestrator | 2025-02-04 09:01:34.071717 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-02-04 09:01:34.071758 | orchestrator | changed: [testbed-manager] 2025-02-04 09:01:34.433502 | orchestrator | 2025-02-04 09:01:34.433624 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-02-04 09:01:34.433660 | orchestrator | ok: [testbed-manager] 2025-02-04 09:01:34.526870 | orchestrator | 2025-02-04 09:01:34.526986 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-02-04 09:01:34.527026 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:01:35.161903 | orchestrator | 2025-02-04 09:01:35.162088 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-02-04 09:01:35.162130 | orchestrator | changed: [testbed-manager] 2025-02-04 09:01:35.260732 | orchestrator | 2025-02-04 09:01:35.260845 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-02-04 09:01:35.260880 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-02-04 09:01:35.313044 | orchestrator | 2025-02-04 09:01:35.313155 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-02-04 09:01:35.313190 | orchestrator | ok: [testbed-manager] 2025-02-04 09:01:37.377544 | orchestrator | 2025-02-04 09:01:37.377653 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] 
*************************** 2025-02-04 09:01:37.377716 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-02-04 09:01:38.105862 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-02-04 09:01:38.106090 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-02-04 09:01:38.106128 | orchestrator | 2025-02-04 09:01:38.106145 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-02-04 09:01:38.106181 | orchestrator | changed: [testbed-manager] 2025-02-04 09:01:38.178628 | orchestrator | 2025-02-04 09:01:38.178801 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-02-04 09:01:38.178837 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-02-04 09:01:38.237454 | orchestrator | 2025-02-04 09:01:38.237570 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-02-04 09:01:38.237698 | orchestrator | ok: [testbed-manager] 2025-02-04 09:01:38.965010 | orchestrator | 2025-02-04 09:01:38.965133 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-02-04 09:01:38.965169 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-02-04 09:01:39.059875 | orchestrator | 2025-02-04 09:01:39.060092 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-02-04 09:01:39.060129 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-02-04 09:01:39.764426 | orchestrator | 2025-02-04 09:01:39.764549 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-02-04 09:01:39.764582 | orchestrator | changed: [testbed-manager] 2025-02-04 09:01:40.381721 | orchestrator | 2025-02-04 09:01:40.381860 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-02-04 09:01:40.381915 | orchestrator | ok: [testbed-manager] 2025-02-04 09:01:40.433973 | orchestrator | 2025-02-04 09:01:40.434152 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-02-04 09:01:40.434190 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:01:40.502242 | orchestrator | 2025-02-04 09:01:40.502351 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-02-04 09:01:40.502382 | orchestrator | ok: [testbed-manager] 2025-02-04 09:01:41.353725 | orchestrator | 2025-02-04 09:01:41.353852 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-02-04 09:01:41.353888 | orchestrator | changed: [testbed-manager] 2025-02-04 09:02:06.221193 | orchestrator | 2025-02-04 09:02:06.221322 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-02-04 09:02:06.221357 | orchestrator | changed: [testbed-manager] 2025-02-04 09:02:06.874474 | orchestrator | 2025-02-04 09:02:06.874560 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-02-04 09:02:06.874580 | orchestrator | ok: [testbed-manager] 2025-02-04 09:02:11.345600 | orchestrator | 2025-02-04 09:02:11.345818 | orchestrator | TASK [osism.services.manager : Manage manager 
service] ************************* 2025-02-04 09:02:11.345863 | orchestrator | changed: [testbed-manager] 2025-02-04 09:02:11.392432 | orchestrator | 2025-02-04 09:02:11.392577 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-02-04 09:02:11.392629 | orchestrator | ok: [testbed-manager] 2025-02-04 09:02:11.461204 | orchestrator | 2025-02-04 09:02:11.461326 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-02-04 09:02:11.461345 | orchestrator | 2025-02-04 09:02:11.461361 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-02-04 09:02:11.461390 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:03:11.509503 | orchestrator | 2025-02-04 09:03:11.509684 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-02-04 09:03:11.509721 | orchestrator | Pausing for 60 seconds 2025-02-04 09:03:13.122271 | orchestrator | changed: [testbed-manager] 2025-02-04 09:03:13.122393 | orchestrator | 2025-02-04 09:03:13.122414 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-02-04 09:03:13.122449 | orchestrator | changed: [testbed-manager] 2025-02-04 09:03:34.264927 | orchestrator | 2025-02-04 09:03:34.265067 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-02-04 09:03:34.265105 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-02-04 09:03:39.357349 | orchestrator | changed: [testbed-manager] 2025-02-04 09:03:39.357468 | orchestrator | 2025-02-04 09:03:39.357486 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-02-04 09:03:39.357513 | orchestrator | changed: [testbed-manager] 2025-02-04 09:03:39.451196 | orchestrator | 2025-02-04 09:03:39.451311 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-02-04 09:03:39.451342 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-02-04 09:03:39.526397 | orchestrator | 2025-02-04 09:03:39.526485 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-02-04 09:03:39.526492 | orchestrator | 2025-02-04 09:03:39.526516 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-02-04 09:03:39.526532 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:03:39.674704 | orchestrator | 2025-02-04 09:03:39.674837 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:03:39.674858 | orchestrator | testbed-manager : ok=103 changed=54 unreachable=0 failed=0 skipped=19 rescued=0 ignored=0 2025-02-04 09:03:39.674874 | orchestrator | 2025-02-04 09:03:39.674908 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-02-04 09:03:39.681645 | orchestrator | + deactivate 2025-02-04 09:03:39.681729 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-02-04 09:03:39.681748 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-02-04 09:03:39.681762 | orchestrator | + export PATH 2025-02-04 09:03:39.681777 | 
orchestrator | + unset _OLD_VIRTUAL_PATH 2025-02-04 09:03:39.681792 | orchestrator | + '[' -n '' ']' 2025-02-04 09:03:39.681806 | orchestrator | + hash -r 2025-02-04 09:03:39.681820 | orchestrator | + '[' -n '' ']' 2025-02-04 09:03:39.681834 | orchestrator | + unset VIRTUAL_ENV 2025-02-04 09:03:39.681848 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-02-04 09:03:39.681863 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-02-04 09:03:39.681877 | orchestrator | + unset -f deactivate 2025-02-04 09:03:39.681893 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-02-04 09:03:39.681921 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-02-04 09:03:39.682341 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-02-04 09:03:39.682369 | orchestrator | + local max_attempts=60 2025-02-04 09:03:39.682386 | orchestrator | + local name=ceph-ansible 2025-02-04 09:03:39.682402 | orchestrator | + local attempt_num=1 2025-02-04 09:03:39.682423 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-02-04 09:03:39.722950 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-02-04 09:03:39.723747 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-02-04 09:03:39.723793 | orchestrator | + local max_attempts=60 2025-02-04 09:03:39.723810 | orchestrator | + local name=kolla-ansible 2025-02-04 09:03:39.723826 | orchestrator | + local attempt_num=1 2025-02-04 09:03:39.723848 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-02-04 09:03:39.755663 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-02-04 09:03:39.756587 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-02-04 09:03:39.756628 | orchestrator | + local max_attempts=60 2025-02-04 09:03:39.756642 | orchestrator | + local name=osism-ansible 2025-02-04 09:03:39.756653 | orchestrator | + local attempt_num=1 2025-02-04 09:03:39.756673 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-02-04 09:03:39.789044 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-02-04 09:03:40.544659 | orchestrator | + [[ true == \t\r\u\e ]] 2025-02-04 09:03:40.544761 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-02-04 09:03:40.544790 | orchestrator | ++ semver latest 8.0.0 2025-02-04 09:03:40.600033 | orchestrator | + [[ -1 -ge 0 ]] 2025-02-04 09:03:40.601315 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-02-04 09:03:40.601336 | orchestrator | + wait_for_container_healthy 60 netbox-netbox-1 2025-02-04 09:03:40.601343 | orchestrator | + local max_attempts=60 2025-02-04 09:03:40.601349 | orchestrator | + local name=netbox-netbox-1 2025-02-04 09:03:40.601354 | orchestrator | + local attempt_num=1 2025-02-04 09:03:40.601372 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' netbox-netbox-1 2025-02-04 09:03:40.637407 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-02-04 09:03:40.646757 | orchestrator | + /opt/configuration/scripts/bootstrap/000-netbox.sh 2025-02-04 09:03:40.646854 | orchestrator | + set -e 2025-02-04 09:03:42.265560 | orchestrator | + osism netbox import 2025-02-04 09:03:42.265759 | orchestrator | 2025-02-04 09:03:42 | INFO  | Task 9d97a00f-9f82-4eb3-8260-a4142a1fa37a is running. Wait. No more output. 
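
The xtrace above only shows the happy path of wait_for_container_healthy (every container already reports healthy on the first docker inspect). A plausible reconstruction of the helper; the local variables and the inspect call follow the trace, while the retry delay and failure handling are assumptions:

wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    # Poll the container's health status until docker reports "healthy".
    until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if (( attempt_num == max_attempts )); then
            echo "container $name did not become healthy in time" >&2
            return 1
        fi
        (( attempt_num++ ))
        sleep 5
    done
}
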
2025-02-04 09:03:46.006348 | orchestrator | + osism netbox init 2025-02-04 09:03:47.297633 | orchestrator | 2025-02-04 09:03:47 | INFO  | Task 62f8c35b-7028-427d-9920-85f42a858cf4 was prepared for execution. 2025-02-04 09:03:48.904056 | orchestrator | 2025-02-04 09:03:47 | INFO  | It takes a moment until task 62f8c35b-7028-427d-9920-85f42a858cf4 has been started and output is visible here. 2025-02-04 09:03:48.904226 | orchestrator | 2025-02-04 09:03:48.906069 | orchestrator | PLAY [Wait for netbox service] ************************************************* 2025-02-04 09:03:48.906740 | orchestrator | 2025-02-04 09:03:48.907499 | orchestrator | TASK [Wait for netbox service] ************************************************* 2025-02-04 09:03:49.797257 | orchestrator | [WARNING]: Platform linux on host localhost is using the discovered Python 2025-02-04 09:03:49.797729 | orchestrator | interpreter at /usr/local/bin/python3.13, but future installation of another 2025-02-04 09:03:49.798122 | orchestrator | Python interpreter could change the meaning of that path. See 2025-02-04 09:03:49.798733 | orchestrator | https://docs.ansible.com/ansible- 2025-02-04 09:03:49.798974 | orchestrator | core/2.18/reference_appendices/interpreter_discovery.html for more information. 2025-02-04 09:03:49.807158 | orchestrator | ok: [localhost] 2025-02-04 09:03:49.809187 | orchestrator | 2025-02-04 09:03:49.809240 | orchestrator | PLAY [Manage sites and locations] ********************************************** 2025-02-04 09:03:49.809797 | orchestrator | 2025-02-04 09:03:49.810230 | orchestrator | TASK [Manage Discworld site] *************************************************** 2025-02-04 09:03:51.063821 | orchestrator | changed: [localhost] 2025-02-04 09:03:51.064731 | orchestrator | 2025-02-04 09:03:52.464889 | orchestrator | TASK [Manage Ankh-Morpork location] ******************************************** 2025-02-04 09:03:52.465032 | orchestrator | changed: [localhost] 2025-02-04 09:03:52.465731 | orchestrator | 2025-02-04 09:03:52.466160 | orchestrator | PLAY [Manage IP prefixes] ****************************************************** 2025-02-04 09:03:52.467173 | orchestrator | 2025-02-04 09:03:52.469119 | orchestrator | TASK [Manage 192.168.16.0/20] ************************************************** 2025-02-04 09:03:53.856236 | orchestrator | changed: [localhost] 2025-02-04 09:03:53.856451 | orchestrator | 2025-02-04 09:03:53.856836 | orchestrator | TASK [Manage 192.168.112.0/20] ************************************************* 2025-02-04 09:03:55.066067 | orchestrator | changed: [localhost] 2025-02-04 09:03:55.066698 | orchestrator | 2025-02-04 09:03:55.066947 | orchestrator | PLAY [Manage IP addresses] ***************************************************** 2025-02-04 09:03:55.067240 | orchestrator | 2025-02-04 09:03:55.067685 | orchestrator | TASK [Manage api.testbed.osism.xyz IP address] ********************************* 2025-02-04 09:03:56.288036 | orchestrator | changed: [localhost] 2025-02-04 09:03:56.288360 | orchestrator | 2025-02-04 09:03:56.288409 | orchestrator | TASK [Manage api-int.testbed.osism.xyz IP address] ***************************** 2025-02-04 09:03:57.350435 | orchestrator | changed: [localhost] 2025-02-04 09:03:57.351085 | orchestrator | 2025-02-04 09:03:57.351143 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:03:57.351521 | orchestrator | 2025-02-04 09:03:57 | INFO  | Play has been completed. 
There may now be a delay until all logs have been written. 2025-02-04 09:03:57.351691 | orchestrator | 2025-02-04 09:03:57 | INFO  | Please wait and do not abort execution. 2025-02-04 09:03:57.352546 | orchestrator | localhost : ok=7 changed=6 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-04 09:03:57.353385 | orchestrator | 2025-02-04 09:03:57.565378 | orchestrator | + osism netbox manage 1000 2025-02-04 09:03:58.832450 | orchestrator | 2025-02-04 09:03:58 | INFO  | Task bef123f8-95a3-4607-8a49-e2d433155164 was prepared for execution. 2025-02-04 09:04:00.509744 | orchestrator | 2025-02-04 09:03:58 | INFO  | It takes a moment until task bef123f8-95a3-4607-8a49-e2d433155164 has been started and output is visible here. 2025-02-04 09:04:00.509904 | orchestrator | 2025-02-04 09:04:00.510587 | orchestrator | PLAY [Manage rack 1000] ******************************************************** 2025-02-04 09:04:00.510618 | orchestrator | 2025-02-04 09:04:00.510640 | orchestrator | TASK [Manage rack 1000] ******************************************************** 2025-02-04 09:04:02.200120 | orchestrator | changed: [localhost] 2025-02-04 09:04:02.200751 | orchestrator | 2025-02-04 09:04:02.201339 | orchestrator | TASK [Manage testbed-switch-0] ************************************************* 2025-02-04 09:04:08.334847 | orchestrator | changed: [localhost] 2025-02-04 09:04:08.335466 | orchestrator | 2025-02-04 09:04:08.335495 | orchestrator | TASK [Manage testbed-switch-1] ************************************************* 2025-02-04 09:04:14.544826 | orchestrator | changed: [localhost] 2025-02-04 09:04:14.545118 | orchestrator | 2025-02-04 09:04:14.545153 | orchestrator | TASK [Manage testbed-switch-2] ************************************************* 2025-02-04 09:04:20.681068 | orchestrator | changed: [localhost] 2025-02-04 09:04:28.423137 | orchestrator | 2025-02-04 09:04:28.423280 | orchestrator | TASK [Manage testbed-manager] ************************************************** 2025-02-04 09:04:28.423319 | orchestrator | changed: [localhost] 2025-02-04 09:04:30.493705 | orchestrator | 2025-02-04 09:04:30.493829 | orchestrator | TASK [Manage testbed-node-0] *************************************************** 2025-02-04 09:04:30.493868 | orchestrator | changed: [localhost] 2025-02-04 09:04:33.160723 | orchestrator | 2025-02-04 09:04:33.160804 | orchestrator | TASK [Manage testbed-node-1] *************************************************** 2025-02-04 09:04:33.160824 | orchestrator | changed: [localhost] 2025-02-04 09:04:33.162001 | orchestrator | 2025-02-04 09:04:33.162054 | orchestrator | TASK [Manage testbed-node-2] *************************************************** 2025-02-04 09:04:35.463883 | orchestrator | changed: [localhost] 2025-02-04 09:04:37.713181 | orchestrator | 2025-02-04 09:04:37.713315 | orchestrator | TASK [Manage testbed-node-3] *************************************************** 2025-02-04 09:04:37.713353 | orchestrator | changed: [localhost] 2025-02-04 09:04:37.713713 | orchestrator | 2025-02-04 09:04:37.714768 | orchestrator | TASK [Manage testbed-node-4] *************************************************** 2025-02-04 09:04:40.047328 | orchestrator | changed: [localhost] 2025-02-04 09:04:40.048195 | orchestrator | 2025-02-04 09:04:40.048512 | orchestrator | TASK [Manage testbed-node-5] *************************************************** 2025-02-04 09:04:42.486145 | orchestrator | changed: [localhost] 2025-02-04 09:04:42.486603 | orchestrator | 2025-02-04 09:04:42.487177 
| orchestrator | TASK [Manage testbed-node-6] *************************************************** 2025-02-04 09:04:44.711486 | orchestrator | changed: [localhost] 2025-02-04 09:04:44.713244 | orchestrator | 2025-02-04 09:04:44.713291 | orchestrator | TASK [Manage testbed-node-7] *************************************************** 2025-02-04 09:04:47.031037 | orchestrator | changed: [localhost] 2025-02-04 09:04:49.291972 | orchestrator | 2025-02-04 09:04:49.292112 | orchestrator | TASK [Manage testbed-node-8] *************************************************** 2025-02-04 09:04:49.292148 | orchestrator | changed: [localhost] 2025-02-04 09:04:51.929630 | orchestrator | 2025-02-04 09:04:51.929925 | orchestrator | TASK [Manage testbed-node-9] *************************************************** 2025-02-04 09:04:51.929969 | orchestrator | changed: [localhost] 2025-02-04 09:04:51.930274 | orchestrator | 2025-02-04 09:04:51.930305 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:04:51.930322 | orchestrator | 2025-02-04 09:04:51 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-04 09:04:51.930338 | orchestrator | 2025-02-04 09:04:51 | INFO  | Please wait and do not abort execution. 2025-02-04 09:04:51.930371 | orchestrator | localhost : ok=15 changed=15 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-04 09:04:51.930463 | orchestrator | 2025-02-04 09:04:52.266689 | orchestrator | + osism netbox connect 1000 --state a 2025-02-04 09:04:53.801647 | orchestrator | 2025-02-04 09:04:53 | INFO  | Task 2f73ac0d-8bc0-4205-a376-591a3886de27 for device testbed-node-7 is running in background 2025-02-04 09:04:53.805556 | orchestrator | 2025-02-04 09:04:53 | INFO  | Task eadeb9f0-5bda-4f35-9760-e153886d1ad0 for device testbed-node-8 is running in background 2025-02-04 09:04:53.810305 | orchestrator | 2025-02-04 09:04:53 | INFO  | Task 5b85ab41-f06a-407b-baed-dcdc800272b8 for device testbed-switch-1 is running in background 2025-02-04 09:04:53.817483 | orchestrator | 2025-02-04 09:04:53 | INFO  | Task 7d7b658b-1244-4fa2-8c63-f44503bca554 for device testbed-node-9 is running in background 2025-02-04 09:04:53.824376 | orchestrator | 2025-02-04 09:04:53 | INFO  | Task b6e640ec-0fe8-4972-84c6-0fe924bf9ec1 for device testbed-node-3 is running in background 2025-02-04 09:04:53.825753 | orchestrator | 2025-02-04 09:04:53 | INFO  | Task 23d8a0fb-a768-4708-a005-351590027246 for device testbed-node-2 is running in background 2025-02-04 09:04:53.825823 | orchestrator | 2025-02-04 09:04:53 | INFO  | Task 6aaf3916-bd99-4ecf-803b-4a2393807d88 for device testbed-node-5 is running in background 2025-02-04 09:04:53.829429 | orchestrator | 2025-02-04 09:04:53 | INFO  | Task 8a9d25a5-f2bb-4623-b9f7-1ac5c6655af9 for device testbed-node-4 is running in background 2025-02-04 09:04:53.831510 | orchestrator | 2025-02-04 09:04:53 | INFO  | Task 064de7e5-a8be-4d39-8c93-f57b02afd259 for device testbed-manager is running in background 2025-02-04 09:04:53.833945 | orchestrator | 2025-02-04 09:04:53 | INFO  | Task 21664446-1c44-408f-a9ea-d05bf048f535 for device testbed-switch-0 is running in background 2025-02-04 09:04:53.837208 | orchestrator | 2025-02-04 09:04:53 | INFO  | Task 5fa001ff-d720-48d0-b380-982cb83c31d0 for device testbed-switch-2 is running in background 2025-02-04 09:04:53.844178 | orchestrator | 2025-02-04 09:04:53 | INFO  | Task f7845827-7e18-4511-b1bc-4e177010e1e6 for device testbed-node-6 is running 
in background 2025-02-04 09:04:53.849579 | orchestrator | 2025-02-04 09:04:53 | INFO  | Task 9bbb8b4b-b622-4e31-af8a-ef9a62b2d5cf for device testbed-node-0 is running in background 2025-02-04 09:04:53.853264 | orchestrator | 2025-02-04 09:04:53 | INFO  | Task d293b8b8-cc2e-46ba-9500-be6939d23f1c for device testbed-node-1 is running in background 2025-02-04 09:04:54.140607 | orchestrator | 2025-02-04 09:04:53 | INFO  | Tasks are running in background. No more output. Check Flower for logs. 2025-02-04 09:04:54.140747 | orchestrator | + osism netbox disable --no-wait testbed-switch-0 2025-02-04 09:04:55.843208 | orchestrator | + osism netbox disable --no-wait testbed-switch-1 2025-02-04 09:04:57.573333 | orchestrator | + osism netbox disable --no-wait testbed-switch-2 2025-02-04 09:04:59.177486 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-02-04 09:04:59.411769 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-02-04 09:04:59.418365 | orchestrator | ceph-ansible quay.io/osism/ceph-ansible:quincy "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up 2 minutes (healthy) 2025-02-04 09:04:59.418432 | orchestrator | kolla-ansible quay.io/osism/kolla-ansible:2024.1 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up 2 minutes (healthy) 2025-02-04 09:04:59.418449 | orchestrator | manager-api-1 quay.io/osism/osism:latest "/usr/bin/tini -- os…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2025-02-04 09:04:59.418465 | orchestrator | manager-ara-server-1 quay.io/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2025-02-04 09:04:59.418481 | orchestrator | manager-beat-1 quay.io/osism/osism:latest "/usr/bin/tini -- os…" beat 2 minutes ago Up 2 minutes (healthy) 2025-02-04 09:04:59.418495 | orchestrator | manager-conductor-1 quay.io/osism/osism:latest "/usr/bin/tini -- os…" conductor 2 minutes ago Up 2 minutes (healthy) 2025-02-04 09:04:59.418510 | orchestrator | manager-flower-1 quay.io/osism/osism:latest "/usr/bin/tini -- os…" flower 2 minutes ago Up 2 minutes (healthy) 2025-02-04 09:04:59.418589 | orchestrator | manager-inventory_reconciler-1 quay.io/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up 2 minutes (healthy) 2025-02-04 09:04:59.418606 | orchestrator | manager-listener-1 quay.io/osism/osism:latest "/usr/bin/tini -- os…" listener 2 minutes ago Up 2 minutes (healthy) 2025-02-04 09:04:59.418621 | orchestrator | manager-mariadb-1 index.docker.io/library/mariadb:11.6.2 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2025-02-04 09:04:59.418665 | orchestrator | manager-netbox-1 quay.io/osism/osism-netbox:latest "/usr/bin/tini -- os…" netbox 2 minutes ago Up 2 minutes (healthy) 2025-02-04 09:04:59.418680 | orchestrator | manager-openstack-1 quay.io/osism/osism:latest "/usr/bin/tini -- os…" openstack 2 minutes ago Up 2 minutes (healthy) 2025-02-04 09:04:59.418695 | orchestrator | manager-redis-1 index.docker.io/library/redis:7.4.2-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2025-02-04 09:04:59.418710 | orchestrator | manager-watchdog-1 quay.io/osism/osism:latest "/usr/bin/tini -- os…" watchdog 2 minutes ago Up 2 minutes (healthy) 2025-02-04 09:04:59.418724 | orchestrator | osism-ansible quay.io/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up 2 minutes (healthy) 2025-02-04 09:04:59.418738 | orchestrator | osism-kubernetes 
quay.io/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up 2 minutes (healthy) 2025-02-04 09:04:59.418752 | orchestrator | osismclient quay.io/osism/osism:latest "/usr/bin/tini -- sl…" osismclient 2 minutes ago Up 2 minutes (healthy) 2025-02-04 09:04:59.418779 | orchestrator | + docker compose --project-directory /opt/netbox ps 2025-02-04 09:04:59.558412 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-02-04 09:04:59.566500 | orchestrator | netbox-netbox-1 quay.io/osism/netbox:v4.1.10 "/usr/bin/tini -- /o…" netbox 8 minutes ago Up 7 minutes (healthy) 2025-02-04 09:04:59.566604 | orchestrator | netbox-netbox-worker-1 quay.io/osism/netbox:v4.1.10 "/opt/netbox/venv/bi…" netbox-worker 8 minutes ago Up 3 minutes (healthy) 2025-02-04 09:04:59.566630 | orchestrator | netbox-postgres-1 index.docker.io/library/postgres:16.6-alpine "docker-entrypoint.s…" postgres 8 minutes ago Up 7 minutes (healthy) 5432/tcp 2025-02-04 09:04:59.566655 | orchestrator | netbox-redis-1 index.docker.io/library/redis:7.4.2-alpine "docker-entrypoint.s…" redis 8 minutes ago Up 7 minutes (healthy) 6379/tcp 2025-02-04 09:04:59.566692 | orchestrator | ++ semver latest 7.0.0 2025-02-04 09:04:59.619286 | orchestrator | + [[ -1 -ge 0 ]] 2025-02-04 09:04:59.620615 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-02-04 09:04:59.620658 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-02-04 09:04:59.620683 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-02-04 09:05:00.906606 | orchestrator | 2025-02-04 09:05:00 | INFO  | Task 25270cba-0da0-463b-b787-098fd031598c (resolvconf) was prepared for execution. 2025-02-04 09:05:03.464374 | orchestrator | 2025-02-04 09:05:00 | INFO  | It takes a moment until task 25270cba-0da0-463b-b787-098fd031598c (resolvconf) has been started and output is visible here. 
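The xtrace lines above document a small version gate: semver latest 7.0.0 prints -1, so the numeric test [[ -1 -ge 0 ]] fails, and the literal fallback [[ latest == latest ]] matches before sed swaps the Ansible stdout callback in /opt/configuration/environments/ansible.cfg. A minimal sketch of that gate, assuming a MANAGER_VERSION variable and a semver helper that prints -1/0/1 like a three-way comparison (both assumptions; the actual deploy script may differ):

# Sketch of the version gate seen in the xtrace above (not the actual script).
if [[ $(semver "$MANAGER_VERSION" 7.0.0) -ge 0 || "$MANAGER_VERSION" == "latest" ]]; then
    # On 7.0.0+ and on "latest", switch the stdout callback plugin.
    sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
fi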
2025-02-04 09:05:03.464676 | orchestrator | 2025-02-04 09:05:03.464715 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-02-04 09:05:03.464736 | orchestrator | 2025-02-04 09:05:03.464759 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-04 09:05:03.464785 | orchestrator | Tuesday 04 February 2025 09:05:03 +0000 (0:00:00.070) 0:00:00.070 ****** 2025-02-04 09:05:06.599985 | orchestrator | ok: [testbed-manager] 2025-02-04 09:05:06.656799 | orchestrator | 2025-02-04 09:05:06.656898 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-02-04 09:05:06.656911 | orchestrator | Tuesday 04 February 2025 09:05:06 +0000 (0:00:03.133) 0:00:03.204 ****** 2025-02-04 09:05:06.656936 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:05:06.748446 | orchestrator | 2025-02-04 09:05:06.748594 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-02-04 09:05:06.748646 | orchestrator | Tuesday 04 February 2025 09:05:06 +0000 (0:00:00.059) 0:00:03.264 ****** 2025-02-04 09:05:06.748678 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-02-04 09:05:06.752637 | orchestrator | 2025-02-04 09:05:06.752687 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-02-04 09:05:06.752707 | orchestrator | Tuesday 04 February 2025 09:05:06 +0000 (0:00:00.088) 0:00:03.352 ****** 2025-02-04 09:05:06.835669 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-02-04 09:05:06.837721 | orchestrator | 2025-02-04 09:05:06.837767 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-02-04 09:05:07.775045 | orchestrator | Tuesday 04 February 2025 09:05:06 +0000 (0:00:00.088) 0:00:03.441 ****** 2025-02-04 09:05:07.775211 | orchestrator | ok: [testbed-manager] 2025-02-04 09:05:07.818189 | orchestrator | 2025-02-04 09:05:07.818301 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-02-04 09:05:07.818320 | orchestrator | Tuesday 04 February 2025 09:05:07 +0000 (0:00:00.940) 0:00:04.381 ****** 2025-02-04 09:05:07.818353 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:05:08.272482 | orchestrator | 2025-02-04 09:05:08.272648 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-02-04 09:05:08.272672 | orchestrator | Tuesday 04 February 2025 09:05:07 +0000 (0:00:00.042) 0:00:04.424 ****** 2025-02-04 09:05:08.272705 | orchestrator | ok: [testbed-manager] 2025-02-04 09:05:08.328859 | orchestrator | 2025-02-04 09:05:08.328945 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-02-04 09:05:08.328953 | orchestrator | Tuesday 04 February 2025 09:05:08 +0000 (0:00:00.454) 0:00:04.879 ****** 2025-02-04 09:05:08.328971 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:05:08.792307 | orchestrator | 2025-02-04 09:05:08.792432 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-02-04 09:05:08.792456 | orchestrator | Tuesday 04 February 2025 09:05:08 +0000 (0:00:00.057) 0:00:04.937 
****** 2025-02-04 09:05:08.792489 | orchestrator | changed: [testbed-manager] 2025-02-04 09:05:09.675776 | orchestrator | 2025-02-04 09:05:09.675899 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-02-04 09:05:09.675919 | orchestrator | Tuesday 04 February 2025 09:05:08 +0000 (0:00:00.456) 0:00:05.393 ****** 2025-02-04 09:05:09.675952 | orchestrator | changed: [testbed-manager] 2025-02-04 09:05:09.676517 | orchestrator | 2025-02-04 09:05:09.676764 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-02-04 09:05:09.676795 | orchestrator | Tuesday 04 February 2025 09:05:09 +0000 (0:00:00.886) 0:00:06.280 ****** 2025-02-04 09:05:10.563913 | orchestrator | ok: [testbed-manager] 2025-02-04 09:05:10.629566 | orchestrator | 2025-02-04 09:05:10.629689 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-02-04 09:05:10.629712 | orchestrator | Tuesday 04 February 2025 09:05:10 +0000 (0:00:00.887) 0:00:07.167 ****** 2025-02-04 09:05:10.629751 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-02-04 09:05:10.630002 | orchestrator | 2025-02-04 09:05:10.630178 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-02-04 09:05:11.876933 | orchestrator | Tuesday 04 February 2025 09:05:10 +0000 (0:00:00.069) 0:00:07.237 ****** 2025-02-04 09:05:11.877077 | orchestrator | changed: [testbed-manager] 2025-02-04 09:05:11.877577 | orchestrator | 2025-02-04 09:05:11.877617 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:05:11.877911 | orchestrator | 2025-02-04 09:05:11 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-04 09:05:11.877943 | orchestrator | 2025-02-04 09:05:11 | INFO  | Please wait and do not abort execution. 
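Going only by the task names, the changed steps of this resolvconf run reduce to linking /etc/resolv.conf to the systemd-resolved stub file, copying the role's configuration files, and restarting the service. A rough shell equivalent of the effect (a sketch, not the role itself; the configuration copy is omitted):

# Approximate effect of the changed resolvconf tasks above.
ln -sf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf   # link the stub resolver
systemctl restart systemd-resolved                              # pick up the copied configuration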
2025-02-04 09:05:11.877996 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-02-04 09:05:11.880905 | orchestrator | 2025-02-04 09:05:11.882253 | orchestrator | 2025-02-04 09:05:11.882400 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:05:11.882755 | orchestrator | Tuesday 04 February 2025 09:05:11 +0000 (0:00:01.244) 0:00:08.482 ****** 2025-02-04 09:05:11.882814 | orchestrator | =============================================================================== 2025-02-04 09:05:11.882830 | orchestrator | Gathering Facts --------------------------------------------------------- 3.13s 2025-02-04 09:05:11.882855 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.24s 2025-02-04 09:05:11.883089 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 0.94s 2025-02-04 09:05:11.883180 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.89s 2025-02-04 09:05:11.883450 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 0.89s 2025-02-04 09:05:11.883481 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.46s 2025-02-04 09:05:11.886334 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.45s 2025-02-04 09:05:11.886489 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2025-02-04 09:05:11.886542 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2025-02-04 09:05:11.886784 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.07s 2025-02-04 09:05:11.886876 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2025-02-04 09:05:11.887228 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.06s 2025-02-04 09:05:12.333849 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.04s 2025-02-04 09:05:12.333941 | orchestrator | + osism apply sshconfig 2025-02-04 09:05:13.844040 | orchestrator | 2025-02-04 09:05:13 | INFO  | Task 0a5cb616-449b-4287-9ec1-10158fc7b510 (sshconfig) was prepared for execution. 2025-02-04 09:05:16.999097 | orchestrator | 2025-02-04 09:05:13 | INFO  | It takes a moment until task 0a5cb616-449b-4287-9ec1-10158fc7b510 (sshconfig) has been started and output is visible here. 
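Each osism apply invocation follows the same rhythm visible here: the task is prepared on the manager, handed to a background worker, and the Ansible output is then streamed back into this console, which explains the gap between the "prepared for execution" line and the first play output. The -l flag appears to pass an Ansible limit, so a role can target a single host:

osism apply resolvconf -l testbed-manager   # limit the play to one host, as above
osism apply sshconfig                       # run against the role's default hosts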
2025-02-04 09:05:16.999389 | orchestrator | 2025-02-04 09:05:17.493420 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-02-04 09:05:17.493590 | orchestrator | 2025-02-04 09:05:17.493624 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-02-04 09:05:17.493640 | orchestrator | Tuesday 04 February 2025 09:05:16 +0000 (0:00:00.099) 0:00:00.099 ****** 2025-02-04 09:05:17.493673 | orchestrator | ok: [testbed-manager] 2025-02-04 09:05:17.915058 | orchestrator | 2025-02-04 09:05:17.915222 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exists] ******************* 2025-02-04 09:05:17.915250 | orchestrator | Tuesday 04 February 2025 09:05:17 +0000 (0:00:00.495) 0:00:00.594 ****** 2025-02-04 09:05:17.915952 | orchestrator | changed: [testbed-manager] 2025-02-04 09:05:22.878913 | orchestrator | 2025-02-04 09:05:22.879048 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exists] ************ 2025-02-04 09:05:22.879069 | orchestrator | Tuesday 04 February 2025 09:05:17 +0000 (0:00:00.421) 0:00:01.016 ****** 2025-02-04 09:05:22.879102 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-02-04 09:05:22.880959 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-02-04 09:05:22.881261 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-02-04 09:05:22.881493 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-02-04 09:05:22.881544 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-02-04 09:05:22.881582 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-02-04 09:05:22.881698 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-02-04 09:05:22.882068 | orchestrator | 2025-02-04 09:05:22.882206 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-02-04 09:05:22.882583 | orchestrator | Tuesday 04 February 2025 09:05:22 +0000 (0:00:04.964) 0:00:05.980 ****** 2025-02-04 09:05:22.938137 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:05:22.938318 | orchestrator | 2025-02-04 09:05:22.938339 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-02-04 09:05:22.938359 | orchestrator | Tuesday 04 February 2025 09:05:22 +0000 (0:00:00.059) 0:00:06.039 ****** 2025-02-04 09:05:23.388678 | orchestrator | changed: [testbed-manager] 2025-02-04 09:05:23.388985 | orchestrator | 2025-02-04 09:05:23.389001 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:05:23.389009 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-04 09:05:23.389016 | orchestrator | 2025-02-04 09:05:23 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-04 09:05:23.389022 | orchestrator | 2025-02-04 09:05:23 | INFO  | Please wait and do not abort execution.
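The sshconfig play writes one fragment per host into .ssh/config.d and then concatenates them in the final "Assemble ssh config" task. A sketch of that pattern, with the fragment content and paths as assumptions (only the host name and its 192.168.16.10 address appear in this log):

# Hypothetical per-host fragment plus the assemble step (content assumed).
cat > ~/.ssh/config.d/testbed-node-0 <<EOF
Host testbed-node-0
    HostName 192.168.16.10
EOF
cat ~/.ssh/config.d/* > ~/.ssh/config

The known-hosts play that follows collects host keys in the analogous way; per host it amounts to an ssh-keyscan call such as ssh-keyscan -t rsa,ecdsa,ed25519 testbed-node-0, whose scanned entries are then written out one by one below.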
2025-02-04 09:05:23.389028 | orchestrator | 2025-02-04 09:05:23.389036 | orchestrator | 2025-02-04 09:05:23.389143 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:05:23.389437 | orchestrator | Tuesday 04 February 2025 09:05:23 +0000 (0:00:00.451) 0:00:06.491 ****** 2025-02-04 09:05:23.389697 | orchestrator | =============================================================================== 2025-02-04 09:05:23.389958 | orchestrator | osism.commons.sshconfig : Ensure config for each host exists ------------ 4.96s 2025-02-04 09:05:23.390247 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.50s 2025-02-04 09:05:23.390495 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.45s 2025-02-04 09:05:23.390800 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exists ------------------- 0.42s 2025-02-04 09:05:23.391041 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.06s 2025-02-04 09:05:23.655968 | orchestrator | + osism apply known-hosts 2025-02-04 09:05:24.986693 | orchestrator | 2025-02-04 09:05:24 | INFO  | Task c450ea01-c63b-4b08-9a38-5801fb681148 (known-hosts) was prepared for execution. 2025-02-04 09:05:27.933001 | orchestrator | 2025-02-04 09:05:24 | INFO  | It takes a moment until task c450ea01-c63b-4b08-9a38-5801fb681148 (known-hosts) has been started and output is visible here. 2025-02-04 09:05:27.933162 | orchestrator | 2025-02-04 09:05:27.935200 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-02-04 09:05:27.935479 | orchestrator | 2025-02-04 09:05:27.935855 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-02-04 09:05:27.938132 | orchestrator | Tuesday 04 February 2025 09:05:27 +0000 (0:00:00.120) 0:00:00.121 ****** 2025-02-04 09:05:33.407589 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-02-04 09:05:33.407995 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-02-04 09:05:33.408631 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-02-04 09:05:33.408950 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-02-04 09:05:33.410292 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-02-04 09:05:33.411274 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-02-04 09:05:33.411757 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-02-04 09:05:33.412367 | orchestrator | 2025-02-04 09:05:33.413016 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-02-04 09:05:33.413426 | orchestrator | Tuesday 04 February 2025 09:05:33 +0000 (0:00:05.472) 0:00:05.593 ****** 2025-02-04 09:05:33.608096 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-02-04 09:05:33.609650 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-02-04 09:05:33.611072 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned
entries of testbed-node-4) 2025-02-04 09:05:33.612011 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-02-04 09:05:33.612956 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-02-04 09:05:33.614264 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-02-04 09:05:33.614594 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-02-04 09:05:33.614620 | orchestrator | 2025-02-04 09:05:33.615089 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-04 09:05:33.616647 | orchestrator | Tuesday 04 February 2025 09:05:33 +0000 (0:00:00.202) 0:00:05.795 ****** 2025-02-04 09:05:34.829343 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC6AdRuggQlKPIoKfmX7K9200A+CV9029E54onkx0A8wcLuac1cFXIZTMZmSOzU9VDYPanziskmGm3QzELv0FWK+O8tcEK16Lyg8G0DraUsPkjYpVMpiq/K/MCTNJ6VUYXbqi80eqiOIhtDr9ralQ4NYpbVWiYLnINA4KiH79g4gx2cBum2hxGd6/vOpgnkYMlTXTL6NTRVzzBX0WpCVeyf4VOdMoUh8pSnMkQPJt3FXx3rRxC5n0EJdgx0XTJ2yqzcMOf0ycl+pDZbGOaAbE/CVhh2PW+sLWs5TdpEIqBINbaZs5l1uPI3vddNqO/v9N9YM3d5G+4BmJWQHE8U0MLDe3lN0q6cONP+EEn1ppAh44EnrSD4zNEwJp3Bw2XCNTRKh5/vf1GLRSKH1FGdyOaUe2M07pv9bj9qSKkoZHQn30XjCmCip5EB53036J292fOzzFqnhtUN94VPVc7U3jpt9/2BsFeVaiVzWJUdxRgWCsymC449HVsl8J51TZo96mE=) 2025-02-04 09:05:34.829493 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFAGkpzUNqMsE8Bu1kJVbIs4dRrVSoHMfkUuGnr6s3WfLhEODVtH/yYmqKYArG+Mb39fnOnQ54Bc14DfNLdScZo=) 2025-02-04 09:05:34.830423 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINbY1cO/9T9o1NOmmgpRlL4/rRkQH7SEMFY7nsI70zdf) 2025-02-04 09:05:34.831097 | orchestrator | 2025-02-04 09:05:34.831744 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-04 09:05:34.832680 | orchestrator | Tuesday 04 February 2025 09:05:34 +0000 (0:00:01.220) 0:00:07.016 ****** 2025-02-04 09:05:35.952206 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEJ0k+Ko04IrwSORTeCU0BVhemeanlGqa07F/c9HZHFX) 2025-02-04 09:05:35.952744 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCdIl5I1HnwFr8iGr6HwRp2PWfA3skpIxIz/7kheO20jN+oAcJwoFaM/zATiIvW8TG93z/+7kW0akVIqRA+JVEwbTxK5ynxIHeg6WnO6BnPC7IT9uhaQM/psXzed2ZmNUlEtU91QOU3Zvc388pUeHMzpbVj+0vXtd+B8YtX9TBpCkVavf81qdfv3w0ZCq2K2jDRFvFz6GTaKkwZ5V5iqx64NlwIvLbyKxyQupAw20W/2xCGJ3abo+XY7labJRjMX1bUDJ9FQw9ITCNPZrxVnc9RJHP6BWZPxTESjMGcK177yYP44xZC1gx7rRhVT5fu7n1fVGYz8PL7HJauRoYGXb8/J8WpndvNX7WiU39rVfzwesUZcUIpudSww/VWYD9U88SGxinWI1ukSPwlLj4u2LeYc98ZrWnmLv4w0uzIkSZ8Or49ToRPsRkuBqGgdR+MhxS7qG6Ea64rSvfKtVA4qT2NHPnXGppfu8fXadtRh+d60re5veMyfHKKIeaJlwppQZs=) 2025-02-04 09:05:35.953506 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPBjOIgjwEtmxUQpvKVe6VmbnORvLhVtvr0SQFkSGOVuy6WLD/YJP+LpNnAOjFnowc+vSu7eId1C068GaKmIhtg=) 2025-02-04 09:05:35.953619 | orchestrator | 2025-02-04 09:05:35.955794 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-04 09:05:35.957904 | orchestrator | Tuesday 04 February 2025 09:05:35 +0000 (0:00:01.122) 0:00:08.138 ****** 2025-02-04 09:05:37.046066 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDOhSaEb5SfamnN5uk8Sp0R3NzNKOyRyEa83JvT6is8qfOjs4x1+h6rqtg/wkxgVgKfsIdoYH/CS/cA0ORdslWiUyUDtDjqCqx8vQtPKYbzaDcLQSFpigo8/IsZW6PYupmLzn7UR+4y6BYWefG9WfzBITQpt9JOLJpwetJQc/wtEDSsTKcV3XebeuVHQj1pX30pxXy1gt+hkAhOl2dqzcqcicOMAGOwrYg4nEYidDFaQeRj6ySgKhCyPyPSVc7xKaL7Gqufuf2rLHsRPqGGhF0E2Hfy/ZSJbzo2YHl6zYvX+FlVZQIF6kTMzlC6vDb3EZpU3nDBt+iuYjAqQHcEhUl6BBu93Jab0CMgb8FJQEj1xKUstlr6Hmfjy6CWGb1lsWzgdz31V36zqeImf9U2780qexbjtP64cZdFXFA5V9ER+z2Tsdl7Xm0Fczn3E62sr0XT7b6PJQBk6Cjf1orFLmUcnAmnZfKvJrN1Gxxu2XHKgg0sEYTBLqJv8GG4cBLEmlc=) 2025-02-04 09:05:37.047052 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB7keqhhyYrVbc/E9Ti9ydnDnh9gCBM1iX/XaHeMV6XG) 2025-02-04 09:05:37.048147 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIN+IuZLswXofVXod+GuP9/ZConrrtlyK3BX51wPM+hn/WZORtwS7BODVlT4RBZC0vTT31+90s4/KYAGTRN2JIo=) 2025-02-04 09:05:37.048186 | orchestrator | 2025-02-04 09:05:37.049008 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-04 09:05:37.049745 | orchestrator | Tuesday 04 February 2025 09:05:37 +0000 (0:00:01.094) 0:00:09.233 ****** 2025-02-04 09:05:38.152021 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCSOluzVjymrGM7J1aTXpfOkHeqPbLIuDyBlEV26MqbRFzpTvFGh0HsAVqkydGyo8N2gpOFD/hB//AXBqNimHWlALaZSx4rz3+Q2JjSp8b5hMt+9Bq/dLya8r7CPmlxWwOAdt8oCH98Wh6+JiVi4z79PU0x1w7ZVzLdnAPi833EhAKL31g5p2Lv+bYk2/Ms3w+wVWhXhwrwmn/OU2G5Gixg5DczIrxwtyEGwB9Hjcs3Wj+TKCFSTZcQDawMH/9V/qAGWLPfcJt2/C1PqNqg4/13SKSZsMEahZvNbvlqCBUB73qPJXLmaK5P7u01F4AzCL4RkLYfBf9C497Je07u/vMgj3g6zhVEe22fijCMh2YqMiLEEcePwAjDhaBPkoYaIQSEzyk7UQJeht9ZDAoWvU29Sp872kWgvwrGzCezxGtBXdVaASbwftXxLSRi12GZ1p3dqZXJorFuIwnpSuQytP1k1s1x1KNXXMjvjWzyUiUL1sZ7Fo7YALu+MAdwRcrsxSs=) 2025-02-04 09:05:38.152566 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBgDi3GCmQdxg48MeUkq1Z93Ks2hOGvrL1zzdE1YtU1/NlWCwc1UGWYZ9qdkijR5wvYTo06+qfMuByQojHDhjDM=) 2025-02-04 09:05:38.153164 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHA5D/h8nbHx1hKFnrdF/U6a9TLar8h+jXuRJQWvHdbM) 2025-02-04 09:05:38.154195 | orchestrator | 2025-02-04 09:05:38.154945 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-04 09:05:38.155766 | orchestrator | Tuesday 04 February 2025 09:05:38 +0000 (0:00:01.105) 0:00:10.338 ****** 2025-02-04 09:05:39.242751 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC3tiPjM0QEUPzwsd2Jn8w5PT8ryBjsgmHbh8QE53Ab38KQya/WELLG121gdv2EgAEj341hoCJynHtyzFMEZSylH+xgM7yl54YYwfL3tEsWezqvRpuMr9sp459yzX5Ibl2Trer62nRAwI1KHnG1r06jgRPrdAL6sfUwWv1LKKD982whJwKhOOTUF0uFjr7LK6M6sDWmaAiDAEYPoKPMd507oHLz+jOFCip+F48DsUGGoH8NZMOAwGyQGtk2TO1i17qTUk8kYlBcF+Ap0fITuhmzZhp+z141dXIhu1y9lg891B6wQ+E6PX4lWLwrARtmOzrcPxD662/dX9URmvmNrus6UeO1GA1HbKKAY8b26FXMKlcepTTrVjtbv3bNYr84cVE6eGnI1VN+ynku8mEPi7ksSi2FL8LzmBt71eIKFTBFV2uoAH0xfZtt0Y00JEVG/DSJShnBtxQ7CVilQdztqCnqANsHx5Xoy7u7Tj/vbJpHFbOQ8hy3CDVc9xMkWsgVxzE=) 2025-02-04 09:05:39.244072 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB8HmDSdOszIlhac5RpWF3zsm7VgePSVfyiXYyI2ny98QNbkIibcGv6maRwCkEmULK3MV1DvBqZr7B/ZxeHqn6c=) 2025-02-04 09:05:39.244471 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMx0sffrPCMPg/LSVNWzfkqIXFyE1e+4E+n992tVrldp) 2025-02-04 09:05:39.245216 | orchestrator | 2025-02-04 09:05:39.245849 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-04 09:05:39.246692 | orchestrator | Tuesday 04 February 2025 09:05:39 +0000 (0:00:01.091) 0:00:11.430 ****** 2025-02-04 09:05:40.346899 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCd+oTwKtA3erHP8wr9WaQCmztVNcN0AvTuYItgoflHSy3QbNzXqFDz0hR/VHmB20+AATx6ZCm718V8KxHzUzj7QH4wtWWI26sQEbjsoW9DccwEoeVEbpINe/y65AnuhNWaXC31G+U2DSremWgdNWykElbRY7Nt6pbwHLI7TtDC5KxuaUitSEfJfVkdg83xZOG/AehY8kHEun6saapk9kMIyqdXBTm7lfeoTJBiuCa05DyVUn7NQPIF3KAYGAvp6Zp6d/oH0ZzEww80NvQXHbOBfz3OK7w5p6hVYQnXMKBSYgbegsck4SKXJQHgsBemvx2l5mDdXoKTFqc9uGM60mzKUmRSXRj9tsiZyPom5PUUgVpGmoHZfh5hECSMbc8q5ZjR8aaL/t/4btnxcKuRy87PfkwFEQjP2mstTOQTLrw59FNx49U7rqwGHTgdDYocEQncEr4ZVfL6HAvVtb157pSyAwk8TNPOnbgv+hhEdtCsmtVZ6Z1OV98Km8wSiEK1qeU=) 2025-02-04 09:05:40.347030 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLgU7AdXcjZgE344H15f2doiIuicStoiGk6sje0M1Lx5sGljTrrD/MiBUi4qGGj+C1XZ3yowCdWphCC8paLfx2M=) 2025-02-04 09:05:40.347047 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEEkTTeEFRzA1kcg0PO5viVGs7W5r2LSiZ/vA1XptQ77) 2025-02-04 09:05:40.347227 | orchestrator | 2025-02-04 09:05:40.347625 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-04 09:05:40.347811 | orchestrator | Tuesday 04 February 2025 09:05:40 +0000 (0:00:01.104) 0:00:12.534 ****** 2025-02-04 09:05:41.422406 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCr1TiJzxTgCbTu+P8yB6VAYRcS35rZnZM3I3h/oN4vlFkyJNAsqb3zDa64R9MhX71aTowm5n3S+ZOQiH4WpQ4HN/nB3jHKJVKE6nKVLGxumLZcViAI1kZX+iZxap6Q9oCd3hGJXvkQ65GkVzv2d7HLe9s932TiqRCbVBMaxdTTydYmzU9ICC/tyBDWGs7xUZYq7jTyS/PTNirlX6MjOGSxrTvbB9VOKTHM367MZz9UvdQpIvycm/TtFsHOtkcl3cQu8IWyNXje63MijeWdc2fL8KB0go30jRvGOV9TKZQvmqkB/ftQUCH15SY+9NPX/fFVUg3C/JFzLytZgtc2IamGBoPiuAwq+kgJ7fRE5HbA+zcUJB2NgsiExcICepYcmKpZOFYPDW2ymbD2emRf9lwIWsUddSHSJbyQ8tM80NlBSFZNsxzLOpL0/2EZC0dj6BVBs+8V0KtUL/vTX7/Z4NxjryMZwZWMoFK/5X3DpRhAMw0qgKw7ZoNnZgaByq54hSc=) 2025-02-04 09:05:41.422756 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA0y7GIquQHZt8GzxhVYB5Mz+FnGhDy5hPTZaT+HYJpfcrntEZJdjY376kBQmtmDU+jPlK8HFwc9c9ImALzhHkU=) 2025-02-04 
09:05:41.422817 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEncizPxZql84YK43ljVllRoMke3PA42kgerHzAVH2cD) 2025-02-04 09:05:41.422844 | orchestrator | 2025-02-04 09:05:41.422873 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-02-04 09:05:41.424013 | orchestrator | Tuesday 04 February 2025 09:05:41 +0000 (0:00:01.075) 0:00:13.610 ****** 2025-02-04 09:05:46.670902 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-02-04 09:05:46.672182 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-02-04 09:05:46.672207 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-02-04 09:05:46.672213 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-02-04 09:05:46.672223 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-02-04 09:05:46.673210 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-02-04 09:05:46.673866 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-02-04 09:05:46.674400 | orchestrator | 2025-02-04 09:05:46.674958 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-02-04 09:05:46.675482 | orchestrator | Tuesday 04 February 2025 09:05:46 +0000 (0:00:05.244) 0:00:18.855 ****** 2025-02-04 09:05:46.873613 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-02-04 09:05:46.874835 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-02-04 09:05:46.874925 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-02-04 09:05:46.875656 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-02-04 09:05:46.876438 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-02-04 09:05:46.876962 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-02-04 09:05:46.877436 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-02-04 09:05:46.878001 | orchestrator | 2025-02-04 09:05:46.880448 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-04 09:05:48.058647 | orchestrator | Tuesday 04 February 2025 09:05:46 +0000 (0:00:00.205) 0:00:19.060 ****** 2025-02-04 09:05:48.058792 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC6AdRuggQlKPIoKfmX7K9200A+CV9029E54onkx0A8wcLuac1cFXIZTMZmSOzU9VDYPanziskmGm3QzELv0FWK+O8tcEK16Lyg8G0DraUsPkjYpVMpiq/K/MCTNJ6VUYXbqi80eqiOIhtDr9ralQ4NYpbVWiYLnINA4KiH79g4gx2cBum2hxGd6/vOpgnkYMlTXTL6NTRVzzBX0WpCVeyf4VOdMoUh8pSnMkQPJt3FXx3rRxC5n0EJdgx0XTJ2yqzcMOf0ycl+pDZbGOaAbE/CVhh2PW+sLWs5TdpEIqBINbaZs5l1uPI3vddNqO/v9N9YM3d5G+4BmJWQHE8U0MLDe3lN0q6cONP+EEn1ppAh44EnrSD4zNEwJp3Bw2XCNTRKh5/vf1GLRSKH1FGdyOaUe2M07pv9bj9qSKkoZHQn30XjCmCip5EB53036J292fOzzFqnhtUN94VPVc7U3jpt9/2BsFeVaiVzWJUdxRgWCsymC449HVsl8J51TZo96mE=) 2025-02-04 09:05:48.059479 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFAGkpzUNqMsE8Bu1kJVbIs4dRrVSoHMfkUuGnr6s3WfLhEODVtH/yYmqKYArG+Mb39fnOnQ54Bc14DfNLdScZo=) 2025-02-04 09:05:48.059623 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINbY1cO/9T9o1NOmmgpRlL4/rRkQH7SEMFY7nsI70zdf) 2025-02-04 09:05:48.060064 | orchestrator | 2025-02-04 09:05:48.060157 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-04 09:05:48.060545 | orchestrator | Tuesday 04 February 2025 09:05:48 +0000 (0:00:01.184) 0:00:20.245 ****** 2025-02-04 09:05:49.129708 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCdIl5I1HnwFr8iGr6HwRp2PWfA3skpIxIz/7kheO20jN+oAcJwoFaM/zATiIvW8TG93z/+7kW0akVIqRA+JVEwbTxK5ynxIHeg6WnO6BnPC7IT9uhaQM/psXzed2ZmNUlEtU91QOU3Zvc388pUeHMzpbVj+0vXtd+B8YtX9TBpCkVavf81qdfv3w0ZCq2K2jDRFvFz6GTaKkwZ5V5iqx64NlwIvLbyKxyQupAw20W/2xCGJ3abo+XY7labJRjMX1bUDJ9FQw9ITCNPZrxVnc9RJHP6BWZPxTESjMGcK177yYP44xZC1gx7rRhVT5fu7n1fVGYz8PL7HJauRoYGXb8/J8WpndvNX7WiU39rVfzwesUZcUIpudSww/VWYD9U88SGxinWI1ukSPwlLj4u2LeYc98ZrWnmLv4w0uzIkSZ8Or49ToRPsRkuBqGgdR+MhxS7qG6Ea64rSvfKtVA4qT2NHPnXGppfu8fXadtRh+d60re5veMyfHKKIeaJlwppQZs=) 2025-02-04 09:05:49.130007 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPBjOIgjwEtmxUQpvKVe6VmbnORvLhVtvr0SQFkSGOVuy6WLD/YJP+LpNnAOjFnowc+vSu7eId1C068GaKmIhtg=) 2025-02-04 09:05:49.130070 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEJ0k+Ko04IrwSORTeCU0BVhemeanlGqa07F/c9HZHFX) 2025-02-04 09:05:49.130083 | orchestrator | 2025-02-04 09:05:49.130094 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-04 09:05:49.130110 | orchestrator | Tuesday 04 February 2025 09:05:49 +0000 (0:00:01.069) 0:00:21.315 ****** 2025-02-04 09:05:50.187673 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDOhSaEb5SfamnN5uk8Sp0R3NzNKOyRyEa83JvT6is8qfOjs4x1+h6rqtg/wkxgVgKfsIdoYH/CS/cA0ORdslWiUyUDtDjqCqx8vQtPKYbzaDcLQSFpigo8/IsZW6PYupmLzn7UR+4y6BYWefG9WfzBITQpt9JOLJpwetJQc/wtEDSsTKcV3XebeuVHQj1pX30pxXy1gt+hkAhOl2dqzcqcicOMAGOwrYg4nEYidDFaQeRj6ySgKhCyPyPSVc7xKaL7Gqufuf2rLHsRPqGGhF0E2Hfy/ZSJbzo2YHl6zYvX+FlVZQIF6kTMzlC6vDb3EZpU3nDBt+iuYjAqQHcEhUl6BBu93Jab0CMgb8FJQEj1xKUstlr6Hmfjy6CWGb1lsWzgdz31V36zqeImf9U2780qexbjtP64cZdFXFA5V9ER+z2Tsdl7Xm0Fczn3E62sr0XT7b6PJQBk6Cjf1orFLmUcnAmnZfKvJrN1Gxxu2XHKgg0sEYTBLqJv8GG4cBLEmlc=) 2025-02-04 09:05:50.187936 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIN+IuZLswXofVXod+GuP9/ZConrrtlyK3BX51wPM+hn/WZORtwS7BODVlT4RBZC0vTT31+90s4/KYAGTRN2JIo=) 2025-02-04 
09:05:50.187968 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB7keqhhyYrVbc/E9Ti9ydnDnh9gCBM1iX/XaHeMV6XG) 2025-02-04 09:05:50.187993 | orchestrator | 2025-02-04 09:05:50.188158 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-04 09:05:50.188712 | orchestrator | Tuesday 04 February 2025 09:05:50 +0000 (0:00:01.057) 0:00:22.372 ****** 2025-02-04 09:05:51.255269 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCSOluzVjymrGM7J1aTXpfOkHeqPbLIuDyBlEV26MqbRFzpTvFGh0HsAVqkydGyo8N2gpOFD/hB//AXBqNimHWlALaZSx4rz3+Q2JjSp8b5hMt+9Bq/dLya8r7CPmlxWwOAdt8oCH98Wh6+JiVi4z79PU0x1w7ZVzLdnAPi833EhAKL31g5p2Lv+bYk2/Ms3w+wVWhXhwrwmn/OU2G5Gixg5DczIrxwtyEGwB9Hjcs3Wj+TKCFSTZcQDawMH/9V/qAGWLPfcJt2/C1PqNqg4/13SKSZsMEahZvNbvlqCBUB73qPJXLmaK5P7u01F4AzCL4RkLYfBf9C497Je07u/vMgj3g6zhVEe22fijCMh2YqMiLEEcePwAjDhaBPkoYaIQSEzyk7UQJeht9ZDAoWvU29Sp872kWgvwrGzCezxGtBXdVaASbwftXxLSRi12GZ1p3dqZXJorFuIwnpSuQytP1k1s1x1KNXXMjvjWzyUiUL1sZ7Fo7YALu+MAdwRcrsxSs=) 2025-02-04 09:05:51.255603 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBgDi3GCmQdxg48MeUkq1Z93Ks2hOGvrL1zzdE1YtU1/NlWCwc1UGWYZ9qdkijR5wvYTo06+qfMuByQojHDhjDM=) 2025-02-04 09:05:51.255652 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHA5D/h8nbHx1hKFnrdF/U6a9TLar8h+jXuRJQWvHdbM) 2025-02-04 09:05:51.256431 | orchestrator | 2025-02-04 09:05:51.256832 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-04 09:05:51.257792 | orchestrator | Tuesday 04 February 2025 09:05:51 +0000 (0:00:01.068) 0:00:23.441 ****** 2025-02-04 09:05:52.351177 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3tiPjM0QEUPzwsd2Jn8w5PT8ryBjsgmHbh8QE53Ab38KQya/WELLG121gdv2EgAEj341hoCJynHtyzFMEZSylH+xgM7yl54YYwfL3tEsWezqvRpuMr9sp459yzX5Ibl2Trer62nRAwI1KHnG1r06jgRPrdAL6sfUwWv1LKKD982whJwKhOOTUF0uFjr7LK6M6sDWmaAiDAEYPoKPMd507oHLz+jOFCip+F48DsUGGoH8NZMOAwGyQGtk2TO1i17qTUk8kYlBcF+Ap0fITuhmzZhp+z141dXIhu1y9lg891B6wQ+E6PX4lWLwrARtmOzrcPxD662/dX9URmvmNrus6UeO1GA1HbKKAY8b26FXMKlcepTTrVjtbv3bNYr84cVE6eGnI1VN+ynku8mEPi7ksSi2FL8LzmBt71eIKFTBFV2uoAH0xfZtt0Y00JEVG/DSJShnBtxQ7CVilQdztqCnqANsHx5Xoy7u7Tj/vbJpHFbOQ8hy3CDVc9xMkWsgVxzE=) 2025-02-04 09:05:52.352205 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB8HmDSdOszIlhac5RpWF3zsm7VgePSVfyiXYyI2ny98QNbkIibcGv6maRwCkEmULK3MV1DvBqZr7B/ZxeHqn6c=) 2025-02-04 09:05:52.352837 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMx0sffrPCMPg/LSVNWzfkqIXFyE1e+4E+n992tVrldp) 2025-02-04 09:05:52.353814 | orchestrator | 2025-02-04 09:05:52.354842 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-04 09:05:52.355399 | orchestrator | Tuesday 04 February 2025 09:05:52 +0000 (0:00:01.096) 0:00:24.538 ****** 2025-02-04 09:05:53.456008 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCd+oTwKtA3erHP8wr9WaQCmztVNcN0AvTuYItgoflHSy3QbNzXqFDz0hR/VHmB20+AATx6ZCm718V8KxHzUzj7QH4wtWWI26sQEbjsoW9DccwEoeVEbpINe/y65AnuhNWaXC31G+U2DSremWgdNWykElbRY7Nt6pbwHLI7TtDC5KxuaUitSEfJfVkdg83xZOG/AehY8kHEun6saapk9kMIyqdXBTm7lfeoTJBiuCa05DyVUn7NQPIF3KAYGAvp6Zp6d/oH0ZzEww80NvQXHbOBfz3OK7w5p6hVYQnXMKBSYgbegsck4SKXJQHgsBemvx2l5mDdXoKTFqc9uGM60mzKUmRSXRj9tsiZyPom5PUUgVpGmoHZfh5hECSMbc8q5ZjR8aaL/t/4btnxcKuRy87PfkwFEQjP2mstTOQTLrw59FNx49U7rqwGHTgdDYocEQncEr4ZVfL6HAvVtb157pSyAwk8TNPOnbgv+hhEdtCsmtVZ6Z1OV98Km8wSiEK1qeU=) 2025-02-04 09:05:53.456329 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLgU7AdXcjZgE344H15f2doiIuicStoiGk6sje0M1Lx5sGljTrrD/MiBUi4qGGj+C1XZ3yowCdWphCC8paLfx2M=) 2025-02-04 09:05:53.456391 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEEkTTeEFRzA1kcg0PO5viVGs7W5r2LSiZ/vA1XptQ77) 2025-02-04 09:05:53.456648 | orchestrator | 2025-02-04 09:05:53.457076 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-02-04 09:05:53.457422 | orchestrator | Tuesday 04 February 2025 09:05:53 +0000 (0:00:01.103) 0:00:25.641 ****** 2025-02-04 09:05:54.546186 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCr1TiJzxTgCbTu+P8yB6VAYRcS35rZnZM3I3h/oN4vlFkyJNAsqb3zDa64R9MhX71aTowm5n3S+ZOQiH4WpQ4HN/nB3jHKJVKE6nKVLGxumLZcViAI1kZX+iZxap6Q9oCd3hGJXvkQ65GkVzv2d7HLe9s932TiqRCbVBMaxdTTydYmzU9ICC/tyBDWGs7xUZYq7jTyS/PTNirlX6MjOGSxrTvbB9VOKTHM367MZz9UvdQpIvycm/TtFsHOtkcl3cQu8IWyNXje63MijeWdc2fL8KB0go30jRvGOV9TKZQvmqkB/ftQUCH15SY+9NPX/fFVUg3C/JFzLytZgtc2IamGBoPiuAwq+kgJ7fRE5HbA+zcUJB2NgsiExcICepYcmKpZOFYPDW2ymbD2emRf9lwIWsUddSHSJbyQ8tM80NlBSFZNsxzLOpL0/2EZC0dj6BVBs+8V0KtUL/vTX7/Z4NxjryMZwZWMoFK/5X3DpRhAMw0qgKw7ZoNnZgaByq54hSc=) 2025-02-04 09:05:54.546549 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA0y7GIquQHZt8GzxhVYB5Mz+FnGhDy5hPTZaT+HYJpfcrntEZJdjY376kBQmtmDU+jPlK8HFwc9c9ImALzhHkU=) 2025-02-04 09:05:54.546589 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEncizPxZql84YK43ljVllRoMke3PA42kgerHzAVH2cD) 2025-02-04 09:05:54.547493 | orchestrator | 2025-02-04 09:05:54.547887 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-02-04 09:05:54.548492 | orchestrator | Tuesday 04 February 2025 09:05:54 +0000 (0:00:01.090) 0:00:26.732 ****** 2025-02-04 09:05:54.736422 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-02-04 09:05:54.737624 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-02-04 09:05:54.739095 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-02-04 09:05:54.739305 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-02-04 09:05:54.739343 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-02-04 09:05:54.739372 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-02-04 09:05:54.739968 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-02-04 09:05:54.740664 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:05:54.740989 | orchestrator | 2025-02-04 09:05:54.741314 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] 
************* 2025-02-04 09:05:54.741694 | orchestrator | Tuesday 04 February 2025 09:05:54 +0000 (0:00:00.192) 0:00:26.924 ****** 2025-02-04 09:05:54.795557 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:05:54.795742 | orchestrator | 2025-02-04 09:05:54.795769 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-02-04 09:05:54.796377 | orchestrator | Tuesday 04 February 2025 09:05:54 +0000 (0:00:00.058) 0:00:26.983 ****** 2025-02-04 09:05:54.866627 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:05:54.867567 | orchestrator | 2025-02-04 09:05:54.868682 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-02-04 09:05:54.869469 | orchestrator | Tuesday 04 February 2025 09:05:54 +0000 (0:00:00.070) 0:00:27.054 ****** 2025-02-04 09:05:55.521441 | orchestrator | changed: [testbed-manager] 2025-02-04 09:05:55.521748 | orchestrator | 2025-02-04 09:05:55.521772 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:05:55.521789 | orchestrator | 2025-02-04 09:05:55 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-04 09:05:55.521813 | orchestrator | 2025-02-04 09:05:55 | INFO  | Please wait and do not abort execution. 2025-02-04 09:05:55.523178 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-02-04 09:05:55.523879 | orchestrator | 2025-02-04 09:05:55.524924 | orchestrator | 2025-02-04 09:05:55.525342 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:05:55.525836 | orchestrator | Tuesday 04 February 2025 09:05:55 +0000 (0:00:00.655) 0:00:27.709 ****** 2025-02-04 09:05:55.526618 | orchestrator | =============================================================================== 2025-02-04 09:05:55.527033 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.47s 2025-02-04 09:05:55.527740 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.24s 2025-02-04 09:05:55.528062 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.22s 2025-02-04 09:05:55.528705 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2025-02-04 09:05:55.529085 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-02-04 09:05:55.529555 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-02-04 09:05:55.529995 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-02-04 09:05:55.530661 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-02-04 09:05:55.531214 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-02-04 09:05:55.531533 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-02-04 09:05:55.532144 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-02-04 09:05:55.532690 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-02-04 09:05:55.533100 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries 
----------- 1.08s 2025-02-04 09:05:55.533471 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-02-04 09:05:55.533854 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-02-04 09:05:55.534187 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-02-04 09:05:55.535062 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.66s 2025-02-04 09:05:55.535162 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.21s 2025-02-04 09:05:55.535855 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.20s 2025-02-04 09:05:55.536045 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.19s 2025-02-04 09:05:55.972152 | orchestrator | ++ semver latest 7.0.0 2025-02-04 09:05:56.025120 | orchestrator | + [[ -1 -ge 0 ]] 2025-02-04 09:05:57.459539 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-02-04 09:05:57.459679 | orchestrator | + osism apply nexus 2025-02-04 09:05:57.459733 | orchestrator | 2025-02-04 09:05:57 | INFO  | Task f07b06cd-b621-47b3-8444-b54fb61218ad (nexus) was prepared for execution. 2025-02-04 09:06:00.533176 | orchestrator | 2025-02-04 09:05:57 | INFO  | It takes a moment until task f07b06cd-b621-47b3-8444-b54fb61218ad (nexus) has been started and output is visible here. 2025-02-04 09:06:00.533320 | orchestrator | 2025-02-04 09:06:00.534989 | orchestrator | PLAY [Apply role nexus] ******************************************************** 2025-02-04 09:06:00.536410 | orchestrator | 2025-02-04 09:06:00.536702 | orchestrator | TASK [osism.services.nexus : Include config tasks] ***************************** 2025-02-04 09:06:00.621984 | orchestrator | Tuesday 04 February 2025 09:06:00 +0000 (0:00:00.113) 0:00:00.113 ****** 2025-02-04 09:06:00.622219 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/config.yml for testbed-manager 2025-02-04 09:06:00.623834 | orchestrator | 2025-02-04 09:06:00.624825 | orchestrator | TASK [osism.services.nexus : Create required directories] ********************** 2025-02-04 09:06:00.624883 | orchestrator | Tuesday 04 February 2025 09:06:00 +0000 (0:00:00.090) 0:00:00.204 ****** 2025-02-04 09:06:01.449994 | orchestrator | changed: [testbed-manager] => (item=/opt/nexus) 2025-02-04 09:06:01.450287 | orchestrator | changed: [testbed-manager] => (item=/opt/nexus/configuration) 2025-02-04 09:06:01.450322 | orchestrator | 2025-02-04 09:06:01.450901 | orchestrator | TASK [osism.services.nexus : Set UID for nexus_configuration_directory] ******** 2025-02-04 09:06:01.452360 | orchestrator | Tuesday 04 February 2025 09:06:01 +0000 (0:00:00.828) 0:00:01.033 ****** 2025-02-04 09:06:01.848416 | orchestrator | changed: [testbed-manager] 2025-02-04 09:06:01.848694 | orchestrator | 2025-02-04 09:06:01.848719 | orchestrator | TASK [osism.services.nexus : Copy configuration files] ************************* 2025-02-04 09:06:01.848743 | orchestrator | Tuesday 04 February 2025 09:06:01 +0000 (0:00:00.397) 0:00:01.431 ****** 2025-02-04 09:06:03.805476 | orchestrator | changed: [testbed-manager] => (item=nexus.properties) 2025-02-04 09:06:03.806190 | orchestrator | changed: [testbed-manager] => (item=nexus.env) 2025-02-04 09:06:03.806232 | orchestrator | 2025-02-04 09:06:03.806250 | 
orchestrator | TASK [osism.services.nexus : Include service tasks] **************************** 2025-02-04 09:06:03.806278 | orchestrator | Tuesday 04 February 2025 09:06:03 +0000 (0:00:01.954) 0:00:03.385 ****** 2025-02-04 09:06:03.905202 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/service.yml for testbed-manager 2025-02-04 09:06:03.906384 | orchestrator | 2025-02-04 09:06:03.906868 | orchestrator | TASK [osism.services.nexus : Copy nexus systemd unit file] ********************* 2025-02-04 09:06:03.906925 | orchestrator | Tuesday 04 February 2025 09:06:03 +0000 (0:00:00.103) 0:00:03.489 ****** 2025-02-04 09:06:04.765429 | orchestrator | changed: [testbed-manager] 2025-02-04 09:06:04.766548 | orchestrator | 2025-02-04 09:06:04.767012 | orchestrator | TASK [osism.services.nexus : Create traefik external network] ****************** 2025-02-04 09:06:04.767643 | orchestrator | Tuesday 04 February 2025 09:06:04 +0000 (0:00:00.858) 0:00:04.348 ****** 2025-02-04 09:06:05.577632 | orchestrator | ok: [testbed-manager] 2025-02-04 09:06:06.570130 | orchestrator | 2025-02-04 09:06:06.570429 | orchestrator | TASK [osism.services.nexus : Copy docker-compose.yml file] ********************* 2025-02-04 09:06:06.570465 | orchestrator | Tuesday 04 February 2025 09:06:05 +0000 (0:00:00.812) 0:00:05.161 ****** 2025-02-04 09:06:06.570551 | orchestrator | changed: [testbed-manager] 2025-02-04 09:06:06.570866 | orchestrator | 2025-02-04 09:06:06.570907 | orchestrator | TASK [osism.services.nexus : Stop and disable old service docker-compose@nexus] *** 2025-02-04 09:06:06.571888 | orchestrator | Tuesday 04 February 2025 09:06:06 +0000 (0:00:00.990) 0:00:06.151 ****** 2025-02-04 09:06:07.540376 | orchestrator | ok: [testbed-manager] 2025-02-04 09:06:09.004476 | orchestrator | 2025-02-04 09:06:09.004622 | orchestrator | TASK [osism.services.nexus : Manage nexus service] ***************************** 2025-02-04 09:06:09.004644 | orchestrator | Tuesday 04 February 2025 09:06:07 +0000 (0:00:00.970) 0:00:07.121 ****** 2025-02-04 09:06:09.004674 | orchestrator | changed: [testbed-manager] 2025-02-04 09:06:09.005232 | orchestrator | 2025-02-04 09:06:09.006153 | orchestrator | TASK [osism.services.nexus : Register that nexus service was started] ********** 2025-02-04 09:06:09.006188 | orchestrator | Tuesday 04 February 2025 09:06:08 +0000 (0:00:01.463) 0:00:08.584 ****** 2025-02-04 09:06:09.101411 | orchestrator | ok: [testbed-manager] 2025-02-04 09:06:09.102062 | orchestrator | 2025-02-04 09:06:09.102565 | orchestrator | TASK [osism.services.nexus : Flush handlers] *********************************** 2025-02-04 09:06:09.102932 | orchestrator | Tuesday 04 February 2025 09:06:09 +0000 (0:00:00.071) 0:00:08.656 ****** 2025-02-04 09:06:09.103642 | orchestrator | 2025-02-04 09:06:09.104379 | orchestrator | RUNNING HANDLER [osism.services.nexus : Restart nexus service] ***************** 2025-02-04 09:06:09.104627 | orchestrator | Tuesday 04 February 2025 09:06:09 +0000 (0:00:00.028) 0:00:08.684 ****** 2025-02-04 09:06:09.192662 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:06:09.192890 | orchestrator | 2025-02-04 09:06:09.193830 | orchestrator | RUNNING HANDLER [osism.services.nexus : Wait for nexus service to start] ******* 2025-02-04 09:06:09.194309 | orchestrator | Tuesday 04 February 2025 09:06:09 +0000 (0:00:00.090) 0:00:08.775 ****** 2025-02-04 09:07:09.277702 | orchestrator | Pausing for 60 seconds 2025-02-04 09:07:09.913622 | orchestrator | 
changed: [testbed-manager] 2025-02-04 09:07:09.913748 | orchestrator | 2025-02-04 09:07:09.913770 | orchestrator | RUNNING HANDLER [osism.services.nexus : Ensure that all containers are up] ***** 2025-02-04 09:07:09.913786 | orchestrator | Tuesday 04 February 2025 09:07:09 +0000 (0:01:00.080) 0:01:08.855 ****** 2025-02-04 09:07:09.913817 | orchestrator | changed: [testbed-manager] 2025-02-04 09:07:09.915806 | orchestrator | 2025-02-04 09:07:09.915920 | orchestrator | RUNNING HANDLER [osism.services.nexus : Wait for a healthy nexus service] ****** 2025-02-04 09:07:09.915956 | orchestrator | Tuesday 04 February 2025 09:07:09 +0000 (0:00:00.641) 0:01:09.497 ****** 2025-02-04 09:07:30.861835 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for a healthy nexus service (50 retries left). 2025-02-04 09:07:30.923663 | orchestrator | changed: [testbed-manager] 2025-02-04 09:07:30.923760 | orchestrator | 2025-02-04 09:07:30.923773 | orchestrator | TASK [osism.services.nexus : Include initialize tasks] ************************* 2025-02-04 09:07:30.923784 | orchestrator | Tuesday 04 February 2025 09:07:30 +0000 (0:00:20.939) 0:01:30.437 ****** 2025-02-04 09:07:30.923805 | orchestrator | [WARNING]: Found variable using reserved name: args 2025-02-04 09:07:30.965977 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/initialize.yml for testbed-manager 2025-02-04 09:07:30.966328 | orchestrator | 2025-02-04 09:07:30.966571 | orchestrator | TASK [osism.services.nexus : Get setup admin password] ************************* 2025-02-04 09:07:30.966606 | orchestrator | Tuesday 04 February 2025 09:07:30 +0000 (0:00:00.110) 0:01:30.548 ****** 2025-02-04 09:07:32.179020 | orchestrator | changed: [testbed-manager] 2025-02-04 09:07:32.259582 | orchestrator | 2025-02-04 09:07:32.259708 | orchestrator | TASK [osism.services.nexus : Set setup admin password] ************************* 2025-02-04 09:07:32.259730 | orchestrator | Tuesday 04 February 2025 09:07:32 +0000 (0:00:01.209) 0:01:31.757 ****** 2025-02-04 09:07:32.259763 | orchestrator | ok: [testbed-manager] 2025-02-04 09:07:35.856594 | orchestrator | 2025-02-04 09:07:35.856764 | orchestrator | TASK [osism.services.nexus : Provision scripts included in the container image] *** 2025-02-04 09:07:35.856793 | orchestrator | Tuesday 04 February 2025 09:07:32 +0000 (0:00:00.082) 0:01:31.840 ****** 2025-02-04 09:07:35.856831 | orchestrator | changed: [testbed-manager] => (item=anonymous.json) 2025-02-04 09:07:35.859632 | orchestrator | changed: [testbed-manager] => (item=cleanup.json) 2025-02-04 09:07:35.859708 | orchestrator | 2025-02-04 09:07:35.859723 | orchestrator | TASK [osism.services.nexus : Provision scripts included in this ansible role] *** 2025-02-04 09:07:35.859749 | orchestrator | Tuesday 04 February 2025 09:07:35 +0000 (0:00:03.597) 0:01:35.437 ****** 2025-02-04 09:07:36.091426 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/declare-script.yml for testbed-manager => (item=create_repos_from_list) 2025-02-04 09:07:36.092920 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/declare-script.yml for testbed-manager => (item=setup_http_proxy) 2025-02-04 09:07:36.092968 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/declare-script.yml for testbed-manager => (item=setup_realms) 2025-02-04 09:07:36.094250 |
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/declare-script.yml for testbed-manager => (item=update_admin_password) 2025-02-04 09:07:36.094285 | orchestrator | 2025-02-04 09:07:36.094658 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-04 09:07:36.095005 | orchestrator | Tuesday 04 February 2025 09:07:36 +0000 (0:00:00.235) 0:01:35.672 ****** 2025-02-04 09:07:36.160576 | orchestrator | ok: [testbed-manager] 2025-02-04 09:07:36.161034 | orchestrator | 2025-02-04 09:07:36.161070 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-04 09:07:36.161093 | orchestrator | Tuesday 04 February 2025 09:07:36 +0000 (0:00:00.071) 0:01:35.744 ****** 2025-02-04 09:07:36.215663 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:07:36.215865 | orchestrator | 2025-02-04 09:07:36.215898 | orchestrator | TASK [osism.services.nexus : Wait for nexus] *********************************** 2025-02-04 09:07:36.216535 | orchestrator | Tuesday 04 February 2025 09:07:36 +0000 (0:00:00.053) 0:01:35.798 ****** 2025-02-04 09:07:37.141105 | orchestrator | ok: [testbed-manager] 2025-02-04 09:07:37.142222 | orchestrator | 2025-02-04 09:07:37.143690 | orchestrator | TASK [osism.services.nexus : Deleting script create_repos_from_list] *********** 2025-02-04 09:07:37.144326 | orchestrator | Tuesday 04 February 2025 09:07:37 +0000 (0:00:00.924) 0:01:36.723 ****** 2025-02-04 09:07:37.827609 | orchestrator | ok: [testbed-manager] 2025-02-04 09:07:37.828681 | orchestrator | 2025-02-04 09:07:37.830528 | orchestrator | TASK [osism.services.nexus : Declaring script create_repos_from_list] ********** 2025-02-04 09:07:37.830853 | orchestrator | Tuesday 04 February 2025 09:07:37 +0000 (0:00:00.687) 0:01:37.410 ****** 2025-02-04 09:07:38.512520 | orchestrator | ok: [testbed-manager] 2025-02-04 09:07:38.513640 | orchestrator | 2025-02-04 09:07:38.514355 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-04 09:07:38.515139 | orchestrator | Tuesday 04 February 2025 09:07:38 +0000 (0:00:00.684) 0:01:38.094 ****** 2025-02-04 09:07:38.602799 | orchestrator | ok: [testbed-manager] 2025-02-04 09:07:38.603004 | orchestrator | 2025-02-04 09:07:38.604765 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-04 09:07:38.605182 | orchestrator | Tuesday 04 February 2025 09:07:38 +0000 (0:00:00.091) 0:01:38.186 ****** 2025-02-04 09:07:38.669042 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:07:38.669143 | orchestrator | 2025-02-04 09:07:38.669636 | orchestrator | TASK [osism.services.nexus : Wait for nexus] *********************************** 2025-02-04 09:07:38.670245 | orchestrator | Tuesday 04 February 2025 09:07:38 +0000 (0:00:00.066) 0:01:38.253 ****** 2025-02-04 09:07:39.302394 | orchestrator | ok: [testbed-manager] 2025-02-04 09:07:39.302648 | orchestrator | 2025-02-04 09:07:39.302672 | orchestrator | TASK [osism.services.nexus : Deleting script setup_http_proxy] ***************** 2025-02-04 09:07:39.302694 | orchestrator | Tuesday 04 February 2025 09:07:39 +0000 (0:00:00.629) 0:01:38.883 ****** 2025-02-04 09:07:39.971259 | orchestrator | ok: [testbed-manager] 2025-02-04 09:07:39.971674 | orchestrator | 2025-02-04 09:07:39.971710 | orchestrator | TASK [osism.services.nexus : Declaring script setup_http_proxy] **************** 2025-02-04 
09:07:39.972588 | orchestrator | Tuesday 04 February 2025 09:07:39 +0000 (0:00:00.671) 0:01:39.554 ****** 2025-02-04 09:07:40.646162 | orchestrator | ok: [testbed-manager] 2025-02-04 09:07:40.646534 | orchestrator | 2025-02-04 09:07:40.648180 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-04 09:07:40.648736 | orchestrator | Tuesday 04 February 2025 09:07:40 +0000 (0:00:00.674) 0:01:40.229 ****** 2025-02-04 09:07:40.728416 | orchestrator | ok: [testbed-manager] 2025-02-04 09:07:40.728876 | orchestrator | 2025-02-04 09:07:40.728912 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-04 09:07:40.728936 | orchestrator | Tuesday 04 February 2025 09:07:40 +0000 (0:00:00.082) 0:01:40.311 ****** 2025-02-04 09:07:40.793106 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:07:40.793844 | orchestrator | 2025-02-04 09:07:41.475200 | orchestrator | TASK [osism.services.nexus : Wait for nexus] *********************************** 2025-02-04 09:07:41.475329 | orchestrator | Tuesday 04 February 2025 09:07:40 +0000 (0:00:00.063) 0:01:40.374 ****** 2025-02-04 09:07:41.475365 | orchestrator | ok: [testbed-manager] 2025-02-04 09:07:41.478535 | orchestrator | 2025-02-04 09:07:41.479282 | orchestrator | TASK [osism.services.nexus : Deleting script setup_realms] ********************* 2025-02-04 09:07:41.481046 | orchestrator | Tuesday 04 February 2025 09:07:41 +0000 (0:00:00.681) 0:01:41.056 ****** 2025-02-04 09:07:42.158302 | orchestrator | ok: [testbed-manager] 2025-02-04 09:07:42.160733 | orchestrator | 2025-02-04 09:07:42.160786 | orchestrator | TASK [osism.services.nexus : Declaring script setup_realms] ******************** 2025-02-04 09:07:42.163373 | orchestrator | Tuesday 04 February 2025 09:07:42 +0000 (0:00:00.684) 0:01:41.740 ****** 2025-02-04 09:07:42.857132 | orchestrator | ok: [testbed-manager] 2025-02-04 09:07:42.857528 | orchestrator | 2025-02-04 09:07:42.858454 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-04 09:07:42.859167 | orchestrator | Tuesday 04 February 2025 09:07:42 +0000 (0:00:00.699) 0:01:42.440 ****** 2025-02-04 09:07:42.966370 | orchestrator | ok: [testbed-manager] 2025-02-04 09:07:42.967810 | orchestrator | 2025-02-04 09:07:42.967959 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-04 09:07:43.047352 | orchestrator | Tuesday 04 February 2025 09:07:42 +0000 (0:00:00.109) 0:01:42.549 ****** 2025-02-04 09:07:43.047511 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:07:43.047926 | orchestrator | 2025-02-04 09:07:43.048549 | orchestrator | TASK [osism.services.nexus : Wait for nexus] *********************************** 2025-02-04 09:07:43.049554 | orchestrator | Tuesday 04 February 2025 09:07:43 +0000 (0:00:00.078) 0:01:42.628 ****** 2025-02-04 09:07:43.753687 | orchestrator | ok: [testbed-manager] 2025-02-04 09:07:43.754153 | orchestrator | 2025-02-04 09:07:44.446286 | orchestrator | TASK [osism.services.nexus : Deleting script update_admin_password] ************ 2025-02-04 09:07:44.446405 | orchestrator | Tuesday 04 February 2025 09:07:43 +0000 (0:00:00.707) 0:01:43.335 ****** 2025-02-04 09:07:44.446437 | orchestrator | ok: [testbed-manager] 2025-02-04 09:07:44.447118 | orchestrator | 2025-02-04 09:07:44.448684 | orchestrator | TASK [osism.services.nexus : Declaring script update_admin_password] *********** 2025-02-04 09:07:44.450076 | 
orchestrator | Tuesday 04 February 2025 09:07:44 +0000 (0:00:00.692) 0:01:44.028 ****** 2025-02-04 09:07:45.179270 | orchestrator | ok: [testbed-manager] 2025-02-04 09:07:45.182338 | orchestrator | 2025-02-04 09:07:45.277673 | orchestrator | TASK [osism.services.nexus : Set admin password] ******************************* 2025-02-04 09:07:45.277770 | orchestrator | Tuesday 04 February 2025 09:07:45 +0000 (0:00:00.727) 0:01:44.755 ****** 2025-02-04 09:07:45.277794 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/call-script.yml for testbed-manager 2025-02-04 09:07:45.278244 | orchestrator | 2025-02-04 09:07:45.278364 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-04 09:07:45.278398 | orchestrator | Tuesday 04 February 2025 09:07:45 +0000 (0:00:00.104) 0:01:44.859 ****** 2025-02-04 09:07:45.367451 | orchestrator | ok: [testbed-manager] 2025-02-04 09:07:45.383500 | orchestrator | 2025-02-04 09:07:45.446649 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-04 09:07:45.446764 | orchestrator | Tuesday 04 February 2025 09:07:45 +0000 (0:00:00.089) 0:01:44.949 ****** 2025-02-04 09:07:45.446794 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:07:45.456969 | orchestrator | 2025-02-04 09:07:46.088976 | orchestrator | TASK [osism.services.nexus : Wait for nexus] *********************************** 2025-02-04 09:07:46.089095 | orchestrator | Tuesday 04 February 2025 09:07:45 +0000 (0:00:00.082) 0:01:45.031 ****** 2025-02-04 09:07:46.089132 | orchestrator | ok: [testbed-manager] 2025-02-04 09:07:46.089456 | orchestrator | 2025-02-04 09:07:46.089721 | orchestrator | TASK [osism.services.nexus : Calling script update_admin_password] ************* 2025-02-04 09:07:46.091194 | orchestrator | Tuesday 04 February 2025 09:07:46 +0000 (0:00:00.639) 0:01:45.671 ****** 2025-02-04 09:07:47.895974 | orchestrator | ok: [testbed-manager] 2025-02-04 09:07:47.896184 | orchestrator | 2025-02-04 09:07:47.896220 | orchestrator | TASK [osism.services.nexus : Set new admin password] *************************** 2025-02-04 09:07:47.896257 | orchestrator | Tuesday 04 February 2025 09:07:47 +0000 (0:00:01.808) 0:01:47.479 ****** 2025-02-04 09:07:47.952429 | orchestrator | ok: [testbed-manager] 2025-02-04 09:07:47.952662 | orchestrator | 2025-02-04 09:07:47.953392 | orchestrator | TASK [osism.services.nexus : Allow anonymous access] *************************** 2025-02-04 09:07:47.954093 | orchestrator | Tuesday 04 February 2025 09:07:47 +0000 (0:00:00.058) 0:01:47.537 ****** 2025-02-04 09:07:49.797192 | orchestrator | changed: [testbed-manager] 2025-02-04 09:07:51.635989 | orchestrator | 2025-02-04 09:07:51.636126 | orchestrator | TASK [osism.services.nexus : Cleanup default repositories] ********************* 2025-02-04 09:07:51.636149 | orchestrator | Tuesday 04 February 2025 09:07:49 +0000 (0:00:01.840) 0:01:49.377 ****** 2025-02-04 09:07:51.636181 | orchestrator | changed: [testbed-manager] 2025-02-04 09:07:51.637093 | orchestrator | 2025-02-04 09:07:51.637120 | orchestrator | TASK [osism.services.nexus : Setup http proxy] ********************************* 2025-02-04 09:07:51.637142 | orchestrator | Tuesday 04 February 2025 09:07:51 +0000 (0:00:01.835) 0:01:51.213 ****** 2025-02-04 09:07:51.734279 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/call-script.yml for testbed-manager 
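
The Deleting/Declaring/Calling script tasks above appear to wrap the Nexus 3 script REST API: each Groovy script is uploaded under a name and then executed by that name, with the request body handed to the script as args (likely what triggered the earlier Ansible warning about the reserved variable name args). A minimal bash sketch of that declare/call cycle; the URL, credentials, script name, and script body are illustrative placeholders rather than the role's actual values, and the script API has to be enabled on the Nexus side:

    NEXUS_URL=http://localhost:8081
    AUTH=admin:secret
    # Declare: drop any stale copy, then upload the Groovy script as JSON.
    curl -fsS -u "$AUTH" -X DELETE "$NEXUS_URL/service/rest/v1/script/demo" || true
    curl -fsS -u "$AUTH" -X POST "$NEXUS_URL/service/rest/v1/script" \
      -H 'Content-Type: application/json' \
      -d '{"name": "demo", "type": "groovy", "content": "log.info(args)"}'
    # Call: run the script by name; the text/plain body arrives in the script as "args".
    curl -fsS -u "$AUTH" -X POST "$NEXUS_URL/service/rest/v1/script/demo/run" \
      -H 'Content-Type: text/plain' -d 'hello'
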
2025-02-04 09:07:51.734649 | orchestrator | 2025-02-04 09:07:51.735294 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-04 09:07:51.738116 | orchestrator | Tuesday 04 February 2025 09:07:51 +0000 (0:00:00.105) 0:01:51.319 ****** 2025-02-04 09:07:51.815322 | orchestrator | ok: [testbed-manager] 2025-02-04 09:07:51.815857 | orchestrator | 2025-02-04 09:07:51.816353 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-04 09:07:51.817559 | orchestrator | Tuesday 04 February 2025 09:07:51 +0000 (0:00:00.081) 0:01:51.400 ****** 2025-02-04 09:07:51.897521 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:07:51.898573 | orchestrator | 2025-02-04 09:07:51.899038 | orchestrator | TASK [osism.services.nexus : Wait for nexus] *********************************** 2025-02-04 09:07:51.900070 | orchestrator | Tuesday 04 February 2025 09:07:51 +0000 (0:00:00.081) 0:01:51.481 ****** 2025-02-04 09:07:52.530590 | orchestrator | ok: [testbed-manager] 2025-02-04 09:07:52.531708 | orchestrator | 2025-02-04 09:07:52.532297 | orchestrator | TASK [osism.services.nexus : Calling script setup_http_proxy] ****************** 2025-02-04 09:07:52.532928 | orchestrator | Tuesday 04 February 2025 09:07:52 +0000 (0:00:00.632) 0:01:52.113 ****** 2025-02-04 09:07:53.467906 | orchestrator | ok: [testbed-manager] 2025-02-04 09:07:53.468517 | orchestrator | 2025-02-04 09:07:53.641722 | orchestrator | TASK [osism.services.nexus : Setup realms] ************************************* 2025-02-04 09:07:53.641846 | orchestrator | Tuesday 04 February 2025 09:07:53 +0000 (0:00:00.934) 0:01:53.048 ****** 2025-02-04 09:07:53.641884 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/call-script.yml for testbed-manager 2025-02-04 09:07:53.710271 | orchestrator | 2025-02-04 09:07:53.710384 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-04 09:07:53.710403 | orchestrator | Tuesday 04 February 2025 09:07:53 +0000 (0:00:00.178) 0:01:53.226 ****** 2025-02-04 09:07:53.710434 | orchestrator | ok: [testbed-manager] 2025-02-04 09:07:53.712243 | orchestrator | 2025-02-04 09:07:53.770983 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-04 09:07:53.771094 | orchestrator | Tuesday 04 February 2025 09:07:53 +0000 (0:00:00.068) 0:01:53.295 ****** 2025-02-04 09:07:53.771128 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:07:53.771202 | orchestrator | 2025-02-04 09:07:53.771362 | orchestrator | TASK [osism.services.nexus : Wait for nexus] *********************************** 2025-02-04 09:07:53.771710 | orchestrator | Tuesday 04 February 2025 09:07:53 +0000 (0:00:00.060) 0:01:53.356 ****** 2025-02-04 09:07:54.341984 | orchestrator | ok: [testbed-manager] 2025-02-04 09:07:54.342326 | orchestrator | 2025-02-04 09:07:54.342728 | orchestrator | TASK [osism.services.nexus : Calling script setup_realms] ********************** 2025-02-04 09:07:54.343023 | orchestrator | Tuesday 04 February 2025 09:07:54 +0000 (0:00:00.568) 0:01:53.924 ****** 2025-02-04 09:07:55.329694 | orchestrator | ok: [testbed-manager] 2025-02-04 09:07:55.381970 | orchestrator | 2025-02-04 09:07:55.382331 | orchestrator | TASK [osism.services.nexus : Apply defaults to docker proxy repos] ************* 2025-02-04 09:07:55.382678 | orchestrator | Tuesday 04 February 2025 09:07:55 +0000 (0:00:00.985) 
0:01:54.909 ****** 2025-02-04 09:07:55.382811 | orchestrator | ok: [testbed-manager] 2025-02-04 09:07:55.383224 | orchestrator | 2025-02-04 09:07:55.383252 | orchestrator | TASK [osism.services.nexus : Add docker repositories to global repos list] ***** 2025-02-04 09:07:55.383272 | orchestrator | Tuesday 04 February 2025 09:07:55 +0000 (0:00:00.057) 0:01:54.966 ****** 2025-02-04 09:07:55.448175 | orchestrator | ok: [testbed-manager] 2025-02-04 09:07:55.448777 | orchestrator | 2025-02-04 09:07:55.448822 | orchestrator | TASK [osism.services.nexus : Apply defaults to apt proxy repos] **************** 2025-02-04 09:07:55.449900 | orchestrator | Tuesday 04 February 2025 09:07:55 +0000 (0:00:00.066) 0:01:55.032 ****** 2025-02-04 09:07:55.504950 | orchestrator | ok: [testbed-manager] 2025-02-04 09:07:55.505527 | orchestrator | 2025-02-04 09:07:55.505810 | orchestrator | TASK [osism.services.nexus : Add apt repositories to global repos list] ******** 2025-02-04 09:07:55.506361 | orchestrator | Tuesday 04 February 2025 09:07:55 +0000 (0:00:00.056) 0:01:55.089 ****** 2025-02-04 09:07:55.592312 | orchestrator | ok: [testbed-manager] 2025-02-04 09:07:55.592603 | orchestrator | 2025-02-04 09:07:55.593826 | orchestrator | TASK [osism.services.nexus : Create configured repositories] ******************* 2025-02-04 09:07:55.594881 | orchestrator | Tuesday 04 February 2025 09:07:55 +0000 (0:00:00.086) 0:01:55.176 ****** 2025-02-04 09:07:55.682800 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/nexus/tasks/call-script.yml for testbed-manager 2025-02-04 09:07:55.683067 | orchestrator | 2025-02-04 09:07:55.683893 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-04 09:07:55.684508 | orchestrator | Tuesday 04 February 2025 09:07:55 +0000 (0:00:00.090) 0:01:55.267 ****** 2025-02-04 09:07:55.748536 | orchestrator | ok: [testbed-manager] 2025-02-04 09:07:55.749009 | orchestrator | 2025-02-04 09:07:55.750676 | orchestrator | TASK [osism.services.nexus : Set nexus url] ************************************ 2025-02-04 09:07:55.750934 | orchestrator | Tuesday 04 February 2025 09:07:55 +0000 (0:00:00.066) 0:01:55.333 ****** 2025-02-04 09:07:55.805317 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:07:55.806694 | orchestrator | 2025-02-04 09:07:55.807432 | orchestrator | TASK [osism.services.nexus : Wait for nexus] *********************************** 2025-02-04 09:07:55.808149 | orchestrator | Tuesday 04 February 2025 09:07:55 +0000 (0:00:00.055) 0:01:55.389 ****** 2025-02-04 09:07:56.388777 | orchestrator | ok: [testbed-manager] 2025-02-04 09:07:56.390818 | orchestrator | 2025-02-04 09:07:58.970460 | orchestrator | TASK [osism.services.nexus : Calling script create_repos_from_list] ************ 2025-02-04 09:07:58.970640 | orchestrator | Tuesday 04 February 2025 09:07:56 +0000 (0:00:00.583) 0:01:55.972 ****** 2025-02-04 09:07:58.970677 | orchestrator | ok: [testbed-manager] 2025-02-04 09:07:59.060773 | orchestrator | 2025-02-04 09:07:59.060889 | orchestrator | TASK [Set osism.nexus.status fact] ********************************************* 2025-02-04 09:07:59.060922 | orchestrator | Tuesday 04 February 2025 09:07:58 +0000 (0:00:02.577) 0:01:58.550 ****** 2025-02-04 09:07:59.060949 | orchestrator | included: osism.commons.state for testbed-manager 2025-02-04 09:07:59.061144 | orchestrator | 2025-02-04 09:07:59.061161 | orchestrator | TASK [osism.commons.state : Create custom facts directory] 
********************* 2025-02-04 09:07:59.061176 | orchestrator | Tuesday 04 February 2025 09:07:59 +0000 (0:00:00.094) 0:01:58.645 ****** 2025-02-04 09:07:59.441976 | orchestrator | ok: [testbed-manager] 2025-02-04 09:07:59.442275 | orchestrator | 2025-02-04 09:07:59.442315 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-02-04 09:07:59.950358 | orchestrator | Tuesday 04 February 2025 09:07:59 +0000 (0:00:00.379) 0:01:59.025 ****** 2025-02-04 09:07:59.950557 | orchestrator | changed: [testbed-manager] 2025-02-04 09:07:59.951044 | orchestrator | 2025-02-04 09:07:59.952033 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:07:59.952312 | orchestrator | 2025-02-04 09:07:59 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-04 09:07:59.952602 | orchestrator | 2025-02-04 09:07:59 | INFO  | Please wait and do not abort execution. 2025-02-04 09:07:59.954096 | orchestrator | testbed-manager : ok=64  changed=14  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-02-04 09:07:59.955061 | orchestrator | 2025-02-04 09:07:59.955734 | orchestrator | 2025-02-04 09:07:59.956655 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:07:59.957244 | orchestrator | Tuesday 04 February 2025 09:07:59 +0000 (0:00:00.508) 0:01:59.533 ****** 2025-02-04 09:07:59.957521 | orchestrator | =============================================================================== 2025-02-04 09:07:59.957843 | orchestrator | osism.services.nexus : Wait for nexus service to start ----------------- 60.08s 2025-02-04 09:07:59.958114 | orchestrator | osism.services.nexus : Wait for a healthy nexus service ---------------- 20.94s 2025-02-04 09:07:59.961146 | orchestrator | osism.services.nexus : Provision scripts included in the container image --- 3.60s 2025-02-04 09:07:59.961309 | orchestrator | osism.services.nexus : Calling script create_repos_from_list ------------ 2.58s 2025-02-04 09:07:59.961345 | orchestrator | osism.services.nexus : Copy configuration files ------------------------- 1.95s 2025-02-04 09:07:59.961369 | orchestrator | osism.services.nexus : Allow anonymous access --------------------------- 1.84s 2025-02-04 09:07:59.961392 | orchestrator | osism.services.nexus : Cleanup default repositories --------------------- 1.84s 2025-02-04 09:07:59.961416 | orchestrator | osism.services.nexus : Calling script update_admin_password ------------- 1.81s 2025-02-04 09:07:59.961440 | orchestrator | osism.services.nexus : Manage nexus service ----------------------------- 1.46s 2025-02-04 09:07:59.961464 | orchestrator | osism.services.nexus : Get setup admin password ------------------------- 1.21s 2025-02-04 09:07:59.961519 | orchestrator | osism.services.nexus : Copy docker-compose.yml file --------------------- 0.99s 2025-02-04 09:07:59.961596 | orchestrator | osism.services.nexus : Calling script setup_realms ---------------------- 0.99s 2025-02-04 09:07:59.961668 | orchestrator | osism.services.nexus : Stop and disable old service docker-compose@nexus --- 0.97s 2025-02-04 09:07:59.962124 | orchestrator | osism.services.nexus : Calling script setup_http_proxy ------------------ 0.94s 2025-02-04 09:07:59.962606 | orchestrator | osism.services.nexus : Wait for nexus ----------------------------------- 0.92s 2025-02-04 09:07:59.962991 | orchestrator | osism.services.nexus : Copy nexus systemd unit file 
--------------------- 0.86s 2025-02-04 09:07:59.963029 | orchestrator | osism.services.nexus : Create required directories ---------------------- 0.83s 2025-02-04 09:07:59.963253 | orchestrator | osism.services.nexus : Create traefik external network ------------------ 0.81s 2025-02-04 09:07:59.963556 | orchestrator | osism.services.nexus : Declaring script update_admin_password ----------- 0.73s 2025-02-04 09:07:59.963899 | orchestrator | osism.services.nexus : Wait for nexus ----------------------------------- 0.71s 2025-02-04 09:08:00.264414 | orchestrator | + [[ true == \t\r\u\e ]] 2025-02-04 09:08:00.272229 | orchestrator | + sh -c '/opt/configuration/scripts/set-docker-registry.sh nexus.testbed.osism.xyz:8193' 2025-02-04 09:08:00.272358 | orchestrator | + set -e 2025-02-04 09:08:00.272436 | orchestrator | + source /opt/manager-vars.sh 2025-02-04 09:08:00.272457 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-02-04 09:08:00.272501 | orchestrator | ++ NUMBER_OF_NODES=6 2025-02-04 09:08:00.272515 | orchestrator | ++ export CEPH_VERSION=quincy 2025-02-04 09:08:00.272530 | orchestrator | ++ CEPH_VERSION=quincy 2025-02-04 09:08:00.272544 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-02-04 09:08:00.272560 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-02-04 09:08:00.272574 | orchestrator | ++ export MANAGER_VERSION=latest 2025-02-04 09:08:00.272589 | orchestrator | ++ MANAGER_VERSION=latest 2025-02-04 09:08:00.272603 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-02-04 09:08:00.272618 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-02-04 09:08:00.272632 | orchestrator | ++ export ARA=false 2025-02-04 09:08:00.272646 | orchestrator | ++ ARA=false 2025-02-04 09:08:00.272660 | orchestrator | ++ export TEMPEST=false 2025-02-04 09:08:00.272675 | orchestrator | ++ TEMPEST=false 2025-02-04 09:08:00.272688 | orchestrator | ++ export IS_ZUUL=true 2025-02-04 09:08:00.272702 | orchestrator | ++ IS_ZUUL=true 2025-02-04 09:08:00.272716 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.89 2025-02-04 09:08:00.272731 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.89 2025-02-04 09:08:00.272772 | orchestrator | ++ export EXTERNAL_API=false 2025-02-04 09:08:00.272788 | orchestrator | ++ EXTERNAL_API=false 2025-02-04 09:08:00.272801 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-02-04 09:08:00.272815 | orchestrator | ++ IMAGE_USER=ubuntu 2025-02-04 09:08:00.272830 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-02-04 09:08:00.272844 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-02-04 09:08:00.272858 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-02-04 09:08:00.272872 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-02-04 09:08:00.272890 | orchestrator | + DOCKER_REGISTRY=nexus.testbed.osism.xyz:8193 2025-02-04 09:08:00.278134 | orchestrator | + sed -i 's#ceph_docker_registry: .*#ceph_docker_registry: nexus.testbed.osism.xyz:8193#g' /opt/configuration/inventory/group_vars/all/registries.yml 2025-02-04 09:08:00.278258 | orchestrator | + sed -i 's#docker_registry_ansible: .*#docker_registry_ansible: nexus.testbed.osism.xyz:8193#g' /opt/configuration/inventory/group_vars/all/registries.yml 2025-02-04 09:08:00.283428 | orchestrator | + sed -i 's#docker_registry_kolla: .*#docker_registry_kolla: nexus.testbed.osism.xyz:8193#g' /opt/configuration/inventory/group_vars/all/registries.yml 2025-02-04 09:08:00.288526 | orchestrator | + sed -i 's#docker_registry_netbox: .*#docker_registry_netbox: nexus.testbed.osism.xyz:8193#g' 
/opt/configuration/inventory/group_vars/all/registries.yml 2025-02-04 09:08:00.293298 | orchestrator | + [[ nexus.testbed.osism.xyz:8193 == \o\s\i\s\m\.\h\a\r\b\o\r\.\r\e\g\i\o\.\d\i\g\i\t\a\l ]] 2025-02-04 09:08:00.293862 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-02-04 09:08:00.298841 | orchestrator | + sed -i 's/docker_namespace: osism/docker_namespace: kolla/' /opt/configuration/inventory/group_vars/all/kolla.yml 2025-02-04 09:08:00.298892 | orchestrator | + osism apply squid 2025-02-04 09:08:01.713490 | orchestrator | 2025-02-04 09:08:01 | INFO  | Task 80d7707f-31eb-4032-b96d-afff35f98c34 (squid) was prepared for execution. 2025-02-04 09:08:04.541355 | orchestrator | 2025-02-04 09:08:01 | INFO  | It takes a moment until task 80d7707f-31eb-4032-b96d-afff35f98c34 (squid) has been started and output is visible here. 2025-02-04 09:08:04.541590 | orchestrator | 2025-02-04 09:08:04.544723 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-02-04 09:08:04.544796 | orchestrator | 2025-02-04 09:08:04.545019 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-02-04 09:08:04.545047 | orchestrator | Tuesday 04 February 2025 09:08:04 +0000 (0:00:00.117) 0:00:00.117 ****** 2025-02-04 09:08:04.626125 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-02-04 09:08:04.626707 | orchestrator | 2025-02-04 09:08:04.629974 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-02-04 09:08:04.630233 | orchestrator | Tuesday 04 February 2025 09:08:04 +0000 (0:00:00.085) 0:00:00.202 ****** 2025-02-04 09:08:05.815429 | orchestrator | ok: [testbed-manager] 2025-02-04 09:08:05.815801 | orchestrator | 2025-02-04 09:08:05.815832 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-02-04 09:08:05.816063 | orchestrator | Tuesday 04 February 2025 09:08:05 +0000 (0:00:01.187) 0:00:01.390 ****** 2025-02-04 09:08:07.003352 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-02-04 09:08:07.003604 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-02-04 09:08:07.005925 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-02-04 09:08:07.007456 | orchestrator | 2025-02-04 09:08:07.007775 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-02-04 09:08:07.008363 | orchestrator | Tuesday 04 February 2025 09:08:06 +0000 (0:00:01.187) 0:00:02.578 ****** 2025-02-04 09:08:08.158588 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-02-04 09:08:08.159130 | orchestrator | 2025-02-04 09:08:08.159173 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-02-04 09:08:08.160706 | orchestrator | Tuesday 04 February 2025 09:08:08 +0000 (0:00:01.154) 0:00:03.733 ****** 2025-02-04 09:08:08.528768 | orchestrator | ok: [testbed-manager] 2025-02-04 09:08:09.477629 | orchestrator | 2025-02-04 09:08:09.477771 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-02-04 09:08:09.477792 | orchestrator | Tuesday 04 February 2025 09:08:08 +0000 (0:00:00.367) 0:00:04.100 ****** 2025-02-04 09:08:09.477837 | orchestrator | changed: [testbed-manager] 2025-02-04 09:08:09.478013 | orchestrator | 
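
The set-docker-registry.sh trace above rewrites each registry variable in registries.yml with its own sed call; the pattern collapses into a single loop over the four keys. A sketch of that consolidation (a hypothetical helper, not the actual testbed script), using the values visible in the trace:

    DOCKER_REGISTRY=nexus.testbed.osism.xyz:8193
    FILE=/opt/configuration/inventory/group_vars/all/registries.yml
    # Point every registry variable at the same Nexus-backed mirror.
    for key in ceph_docker_registry docker_registry_ansible \
               docker_registry_kolla docker_registry_netbox; do
        sed -i "s#${key}: .*#${key}: ${DOCKER_REGISTRY}#g" "$FILE"
    done
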
2025-02-04 09:08:09.478081 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-02-04 09:08:09.478101 | orchestrator | Tuesday 04 February 2025 09:08:09 +0000 (0:00:00.950) 0:00:05.050 ****** 2025-02-04 09:08:39.410835 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 2025-02-04 09:08:39.411003 | orchestrator | ok: [testbed-manager] 2025-02-04 09:08:39.411026 | orchestrator | 2025-02-04 09:08:39.411041 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-02-04 09:08:39.411059 | orchestrator | Tuesday 04 February 2025 09:08:39 +0000 (0:00:29.929) 0:00:34.980 ****** 2025-02-04 09:08:51.850064 | orchestrator | changed: [testbed-manager] 2025-02-04 09:09:51.940099 | orchestrator | 2025-02-04 09:09:51.940242 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-02-04 09:09:51.940265 | orchestrator | Tuesday 04 February 2025 09:08:51 +0000 (0:00:12.441) 0:00:47.422 ****** 2025-02-04 09:09:51.940296 | orchestrator | Pausing for 60 seconds 2025-02-04 09:09:51.941682 | orchestrator | changed: [testbed-manager] 2025-02-04 09:09:51.941736 | orchestrator | 2025-02-04 09:09:51.944025 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-02-04 09:09:51.944840 | orchestrator | Tuesday 04 February 2025 09:09:51 +0000 (0:01:00.091) 0:01:47.513 ****** 2025-02-04 09:09:52.004575 | orchestrator | ok: [testbed-manager] 2025-02-04 09:09:52.005570 | orchestrator | 2025-02-04 09:09:52.005637 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for a healthy squid service] ****** 2025-02-04 09:09:52.006059 | orchestrator | Tuesday 04 February 2025 09:09:51 +0000 (0:00:00.066) 0:01:47.579 ****** 2025-02-04 09:09:52.661109 | orchestrator | changed: [testbed-manager] 2025-02-04 09:09:52.661434 | orchestrator | 2025-02-04 09:09:52.661502 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:09:52.661528 | orchestrator | 2025-02-04 09:09:52 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-04 09:09:52.661626 | orchestrator | 2025-02-04 09:09:52 | INFO  | Please wait and do not abort execution. 
2025-02-04 09:09:52.662640 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-04 09:09:52.663123 | orchestrator | 2025-02-04 09:09:52.664096 | orchestrator | 2025-02-04 09:09:52.664542 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:09:52.665124 | orchestrator | Tuesday 04 February 2025 09:09:52 +0000 (0:00:00.658) 0:01:48.238 ****** 2025-02-04 09:09:52.666205 | orchestrator | =============================================================================== 2025-02-04 09:09:52.666349 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2025-02-04 09:09:52.666629 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 29.93s 2025-02-04 09:09:52.666840 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.44s 2025-02-04 09:09:52.667168 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.19s 2025-02-04 09:09:52.667509 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.19s 2025-02-04 09:09:52.667638 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.15s 2025-02-04 09:09:52.667882 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.95s 2025-02-04 09:09:52.668243 | orchestrator | osism.services.squid : Wait for a healthy squid service ----------------- 0.66s 2025-02-04 09:09:52.668373 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.37s 2025-02-04 09:09:52.668723 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2025-02-04 09:09:52.668976 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-02-04 09:09:53.096558 | orchestrator | + rm -f /opt/configuration/environments/kolla/files/overlays/horizon/_9999-custom-settings.py 2025-02-04 09:09:53.102894 | orchestrator | + rm -f /opt/configuration/environments/kolla/files/overlays/horizon/custom_local_settings 2025-02-04 09:09:53.106607 | orchestrator | + rm -f /opt/configuration/environments/kolla/files/overlays/keystone/wsgi-keystone.conf 2025-02-04 09:09:53.110195 | orchestrator | + rm -f /opt/configuration/environments/kolla/group_vars/keystone.yml 2025-02-04 09:09:53.114770 | orchestrator | + rm -rf /opt/configuration/environments/kolla/files/overlays/keystone/federation 2025-02-04 09:09:53.120917 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-02-04 09:09:54.606900 | orchestrator | 2025-02-04 09:09:54 | INFO  | Task e043c247-6235-4c92-9d54-4e0ab7e9d30e (operator) was prepared for execution. 2025-02-04 09:09:57.716106 | orchestrator | 2025-02-04 09:09:54 | INFO  | It takes a moment until task e043c247-6235-4c92-9d54-4e0ab7e9d30e (operator) has been started and output is visible here. 
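
The operator play that follows connects as the image user given via -u (ubuntu) and provisions the dedicated management account on all testbed nodes: group, user, adm/sudo membership, a sudoers drop-in, locale exports in .bashrc, and SSH authorized keys. A quick spot check of the result over SSH; the account name dragon (OSISM's default operator user) and the sudoers path are assumptions here, not taken from this log:

    ssh ubuntu@testbed-node-0 '
      getent passwd dragon              # user exists
      id dragon                         # member of the adm and sudo groups
      sudo cat /etc/sudoers.d/dragon    # sudoers drop-in copied by the role (path assumed)
      grep LC_ALL ~dragon/.bashrc       # C.UTF-8 locale exports set by the role
    '
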
2025-02-04 09:09:57.716212 | orchestrator | 2025-02-04 09:09:57.716293 | orchestrator | PLAY [Make ssh pipelining work] ************************************************ 2025-02-04 09:09:57.716799 | orchestrator | 2025-02-04 09:09:57.719219 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-02-04 09:09:57.719360 | orchestrator | Tuesday 04 February 2025 09:09:57 +0000 (0:00:00.099) 0:00:00.099 ****** 2025-02-04 09:10:01.414091 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:10:01.414373 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:10:01.414407 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:10:01.414424 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:10:01.414440 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:10:01.414495 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:10:01.414962 | orchestrator | 2025-02-04 09:10:01.415973 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-02-04 09:10:02.175666 | orchestrator | Tuesday 04 February 2025 09:10:01 +0000 (0:00:03.698) 0:00:03.798 ****** 2025-02-04 09:10:02.175826 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:10:02.179438 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:10:02.180666 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:10:02.180703 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:10:02.180723 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:10:02.181590 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:10:02.182135 | orchestrator | 2025-02-04 09:10:02.183293 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-02-04 09:10:02.184219 | orchestrator | 2025-02-04 09:10:02.184253 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-02-04 09:10:02.184950 | orchestrator | Tuesday 04 February 2025 09:10:02 +0000 (0:00:00.763) 0:00:04.562 ****** 2025-02-04 09:10:02.244677 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:10:02.266156 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:10:02.293349 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:10:02.357509 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:10:02.358093 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:10:02.359370 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:10:02.360289 | orchestrator | 2025-02-04 09:10:02.361204 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-02-04 09:10:02.361642 | orchestrator | Tuesday 04 February 2025 09:10:02 +0000 (0:00:00.178) 0:00:04.743 ****** 2025-02-04 09:10:02.428144 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:10:02.484903 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:10:02.534865 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:10:02.535699 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:10:02.535970 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:10:02.537409 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:10:02.537918 | orchestrator | 2025-02-04 09:10:02.539293 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-02-04 09:10:02.539943 | orchestrator | Tuesday 04 February 2025 09:10:02 +0000 (0:00:00.636) 0:00:04.921 ****** 2025-02-04 09:10:03.170870 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:10:03.171180 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:10:03.173720 | orchestrator | changed: [testbed-node-3] 2025-02-04 
09:10:03.174253 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:10:03.174318 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:10:03.175668 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:10:03.175904 | orchestrator | 2025-02-04 09:10:03.176544 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-02-04 09:10:03.177054 | orchestrator | Tuesday 04 February 2025 09:10:03 +0000 (0:00:00.636) 0:00:05.557 ****** 2025-02-04 09:10:03.971097 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:10:03.972542 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:10:03.972975 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:10:03.974179 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:10:03.974598 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:10:03.975180 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:10:03.975692 | orchestrator | 2025-02-04 09:10:03.976411 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-02-04 09:10:03.976756 | orchestrator | Tuesday 04 February 2025 09:10:03 +0000 (0:00:00.798) 0:00:06.356 ****** 2025-02-04 09:10:05.158190 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-02-04 09:10:05.159096 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-02-04 09:10:05.159177 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-02-04 09:10:05.161017 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-02-04 09:10:05.161949 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-02-04 09:10:05.162006 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-02-04 09:10:05.163190 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-02-04 09:10:05.164334 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-02-04 09:10:05.164766 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-02-04 09:10:05.166483 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-02-04 09:10:05.166890 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-02-04 09:10:05.167875 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-02-04 09:10:05.168144 | orchestrator | 2025-02-04 09:10:05.169274 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-02-04 09:10:05.170293 | orchestrator | Tuesday 04 February 2025 09:10:05 +0000 (0:00:01.186) 0:00:07.543 ****** 2025-02-04 09:10:06.386261 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:10:06.387720 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:10:06.387850 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:10:06.387866 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:10:06.387875 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:10:06.387888 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:10:06.388038 | orchestrator | 2025-02-04 09:10:06.388895 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-02-04 09:10:06.389045 | orchestrator | Tuesday 04 February 2025 09:10:06 +0000 (0:00:01.227) 0:00:08.771 ****** 2025-02-04 09:10:07.529743 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-02-04 09:10:07.530252 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-02-04 09:10:07.530648 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-02-04 09:10:07.662568 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-02-04 09:10:07.664040 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-02-04 09:10:07.664678 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-02-04 09:10:07.667071 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-02-04 09:10:07.668714 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-02-04 09:10:07.670261 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-02-04 09:10:07.670754 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-02-04 09:10:07.671691 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-02-04 09:10:07.672753 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-02-04 09:10:07.673438 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-02-04 09:10:07.674065 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-02-04 09:10:07.674745 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-02-04 09:10:07.676149 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-02-04 09:10:07.677155 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-02-04 09:10:07.677778 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-02-04 09:10:07.678870 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-02-04 09:10:07.679525 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-02-04 09:10:07.680018 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-02-04 09:10:07.680324 | orchestrator | 2025-02-04 09:10:07.681195 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-02-04 09:10:07.681573 | orchestrator | Tuesday 04 February 2025 09:10:07 +0000 (0:00:01.278) 0:00:10.049 ****** 2025-02-04 09:10:08.225174 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:10:08.225397 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:10:08.225416 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:10:08.225426 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:10:08.226172 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:10:08.226463 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:10:08.227233 | orchestrator | 2025-02-04 09:10:08.227416 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-02-04 09:10:08.228114 | orchestrator | Tuesday 04 February 2025 09:10:08 +0000 (0:00:00.559) 0:00:10.609 ****** 2025-02-04 09:10:08.318840 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:10:08.356884 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:10:08.379177 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:10:08.444517 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:10:08.444660 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:10:08.445735 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:10:08.446259 | orchestrator | 2025-02-04 09:10:08.447741 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 
2025-02-04 09:10:08.448724 | orchestrator | Tuesday 04 February 2025 09:10:08 +0000 (0:00:00.220) 0:00:10.830 ****** 2025-02-04 09:10:09.173789 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-02-04 09:10:09.174082 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:10:09.174122 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-02-04 09:10:09.174146 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:10:09.174818 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-02-04 09:10:09.174850 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-02-04 09:10:09.175680 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:10:09.176132 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:10:09.176163 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-02-04 09:10:09.179173 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:10:09.234910 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-02-04 09:10:09.235032 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:10:09.235052 | orchestrator | 2025-02-04 09:10:09.235069 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-02-04 09:10:09.235085 | orchestrator | Tuesday 04 February 2025 09:10:09 +0000 (0:00:00.727) 0:00:11.557 ****** 2025-02-04 09:10:09.235116 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:10:09.257702 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:10:09.282397 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:10:09.340801 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:10:09.341494 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:10:09.342845 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:10:09.344019 | orchestrator | 2025-02-04 09:10:09.345122 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-02-04 09:10:09.346543 | orchestrator | Tuesday 04 February 2025 09:10:09 +0000 (0:00:00.169) 0:00:11.727 ****** 2025-02-04 09:10:09.393244 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:10:09.418168 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:10:09.436963 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:10:09.509315 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:10:09.510619 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:10:09.511553 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:10:09.512637 | orchestrator | 2025-02-04 09:10:09.513796 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-02-04 09:10:09.514630 | orchestrator | Tuesday 04 February 2025 09:10:09 +0000 (0:00:00.166) 0:00:11.893 ****** 2025-02-04 09:10:09.551619 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:10:09.581621 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:10:09.605240 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:10:09.629875 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:10:09.660981 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:10:09.661206 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:10:09.661852 | orchestrator | 2025-02-04 09:10:09.663583 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-02-04 09:10:10.340678 | orchestrator | Tuesday 04 February 2025 09:10:09 +0000 (0:00:00.154) 0:00:12.048 ****** 2025-02-04 09:10:10.340850 | orchestrator | changed: [testbed-node-1] 2025-02-04 
09:10:10.341183 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:10:10.342383 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:10:10.342826 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:10:10.343559 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:10:10.344534 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:10:10.345031 | orchestrator | 2025-02-04 09:10:10.345683 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-02-04 09:10:10.346197 | orchestrator | Tuesday 04 February 2025 09:10:10 +0000 (0:00:00.678) 0:00:12.726 ****** 2025-02-04 09:10:10.438365 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:10:10.457427 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:10:10.563352 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:10:10.564675 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:10:10.564713 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:10:10.565577 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:10:10.565747 | orchestrator | 2025-02-04 09:10:10.568112 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:10:10.568905 | orchestrator | 2025-02-04 09:10:10 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-04 09:10:10.569175 | orchestrator | 2025-02-04 09:10:10 | INFO  | Please wait and do not abort execution. 2025-02-04 09:10:10.570157 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-04 09:10:10.570575 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-04 09:10:10.571035 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-04 09:10:10.571373 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-04 09:10:10.571855 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-04 09:10:10.572393 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-02-04 09:10:10.573055 | orchestrator | 2025-02-04 09:10:10.573414 | orchestrator | 2025-02-04 09:10:10.573779 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:10:10.574194 | orchestrator | Tuesday 04 February 2025 09:10:10 +0000 (0:00:00.223) 0:00:12.950 ****** 2025-02-04 09:10:10.574753 | orchestrator | =============================================================================== 2025-02-04 09:10:10.575151 | orchestrator | Gathering Facts --------------------------------------------------------- 3.70s 2025-02-04 09:10:10.575563 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.28s 2025-02-04 09:10:10.576115 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.23s 2025-02-04 09:10:10.576426 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.19s 2025-02-04 09:10:10.576870 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.80s 2025-02-04 09:10:10.577244 | orchestrator | Do not require tty for all users ---------------------------------------- 0.76s 2025-02-04 09:10:10.577720 | orchestrator | 
osism.commons.operator : Set ssh authorized keys ------------------------ 0.73s 2025-02-04 09:10:10.578123 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.68s 2025-02-04 09:10:10.578471 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.64s 2025-02-04 09:10:10.578843 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.56s 2025-02-04 09:10:10.579928 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.22s 2025-02-04 09:10:10.580098 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.22s 2025-02-04 09:10:10.580324 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.18s 2025-02-04 09:10:10.580736 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.18s 2025-02-04 09:10:10.581066 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.17s 2025-02-04 09:10:10.581486 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.17s 2025-02-04 09:10:10.581752 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.15s 2025-02-04 09:10:11.027752 | orchestrator | + osism apply --environment custom facts 2025-02-04 09:10:12.410009 | orchestrator | 2025-02-04 09:10:12 | INFO  | Trying to run play facts in environment custom 2025-02-04 09:10:12.455827 | orchestrator | 2025-02-04 09:10:12 | INFO  | Task 20a3b70a-958f-42d8-ae7a-02595a0ae400 (facts) was prepared for execution. 2025-02-04 09:10:15.604035 | orchestrator | 2025-02-04 09:10:12 | INFO  | It takes a moment until task 20a3b70a-958f-42d8-ae7a-02595a0ae400 (facts) has been started and output is visible here. 
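
The facts play started above distributes local fact files to the nodes; Ansible picks up anything under /etc/ansible/facts.d on the next fact gathering and exposes it as ansible_local. A minimal sketch of that mechanism with a hypothetical fact name (the testbed's real facts are the testbed_ceph_* files visible below):

    # An executable *.fact file must print JSON; a non-executable one is parsed as INI.
    sudo mkdir -p /etc/ansible/facts.d
    sudo tee /etc/ansible/facts.d/example.fact >/dev/null <<'EOF'
    #!/usr/bin/env bash
    echo '{"devices": ["/dev/sdb", "/dev/sdc"]}'
    EOF
    sudo chmod +x /etc/ansible/facts.d/example.fact
    # After the next setup/facts run the value is available as ansible_local.example.devices
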
2025-02-04 09:10:15.604187 | orchestrator | 2025-02-04 09:10:15.604905 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-02-04 09:10:15.605499 | orchestrator | 2025-02-04 09:10:15.607014 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-02-04 09:10:15.609386 | orchestrator | Tuesday 04 February 2025 09:10:15 +0000 (0:00:00.090) 0:00:00.090 ****** 2025-02-04 09:10:17.083670 | orchestrator | ok: [testbed-manager] 2025-02-04 09:10:17.084511 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:10:17.086669 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:10:17.086773 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:10:17.086796 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:10:17.087140 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:10:17.087770 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:10:17.088395 | orchestrator | 2025-02-04 09:10:17.088857 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-02-04 09:10:17.089346 | orchestrator | Tuesday 04 February 2025 09:10:17 +0000 (0:00:01.479) 0:00:01.569 ****** 2025-02-04 09:10:18.315126 | orchestrator | ok: [testbed-manager] 2025-02-04 09:10:18.315579 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:10:18.316031 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:10:18.316504 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:10:18.316789 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:10:18.317310 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:10:18.317944 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:10:18.318328 | orchestrator | 2025-02-04 09:10:18.318913 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-02-04 09:10:18.319519 | orchestrator | 2025-02-04 09:10:18.319926 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-02-04 09:10:18.320412 | orchestrator | Tuesday 04 February 2025 09:10:18 +0000 (0:00:01.231) 0:00:02.801 ****** 2025-02-04 09:10:18.432020 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:10:18.432562 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:10:18.434111 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:10:18.434512 | orchestrator | 2025-02-04 09:10:18.435350 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-02-04 09:10:18.436080 | orchestrator | Tuesday 04 February 2025 09:10:18 +0000 (0:00:00.118) 0:00:02.919 ****** 2025-02-04 09:10:18.593546 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:10:18.594759 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:10:18.595203 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:10:18.595855 | orchestrator | 2025-02-04 09:10:18.596523 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-02-04 09:10:18.596925 | orchestrator | Tuesday 04 February 2025 09:10:18 +0000 (0:00:00.160) 0:00:03.080 ****** 2025-02-04 09:10:18.732573 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:10:18.733061 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:10:18.733866 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:10:18.733925 | orchestrator | 2025-02-04 09:10:18.737412 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-02-04 09:10:18.857802 | orchestrator | Tuesday 04 
February 2025 09:10:18 +0000 (0:00:00.140) 0:00:03.221 ****** 2025-02-04 09:10:18.857929 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:10:18.858398 | orchestrator | 2025-02-04 09:10:18.863070 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-02-04 09:10:18.863827 | orchestrator | Tuesday 04 February 2025 09:10:18 +0000 (0:00:00.125) 0:00:03.346 ****** 2025-02-04 09:10:19.367050 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:10:19.367227 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:10:19.367651 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:10:19.367698 | orchestrator | 2025-02-04 09:10:19.367875 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-02-04 09:10:19.368296 | orchestrator | Tuesday 04 February 2025 09:10:19 +0000 (0:00:00.508) 0:00:03.855 ****** 2025-02-04 09:10:19.475706 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:10:19.476996 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:10:19.477969 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:10:19.478693 | orchestrator | 2025-02-04 09:10:19.479574 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-02-04 09:10:19.480279 | orchestrator | Tuesday 04 February 2025 09:10:19 +0000 (0:00:00.108) 0:00:03.963 ****** 2025-02-04 09:10:20.463280 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:10:20.463435 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:10:20.463837 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:10:20.464596 | orchestrator | 2025-02-04 09:10:20.466154 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-02-04 09:10:20.466707 | orchestrator | Tuesday 04 February 2025 09:10:20 +0000 (0:00:00.986) 0:00:04.950 ****** 2025-02-04 09:10:20.918136 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:10:20.918394 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:10:20.919003 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:10:20.919584 | orchestrator | 2025-02-04 09:10:20.920240 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-02-04 09:10:20.920749 | orchestrator | Tuesday 04 February 2025 09:10:20 +0000 (0:00:00.453) 0:00:05.403 ****** 2025-02-04 09:10:22.023966 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:10:22.024114 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:10:22.025155 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:10:22.026224 | orchestrator | 2025-02-04 09:10:22.027286 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-02-04 09:10:22.028482 | orchestrator | Tuesday 04 February 2025 09:10:22 +0000 (0:00:01.104) 0:00:06.508 ****** 2025-02-04 09:10:34.787618 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:10:34.788409 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:10:34.788510 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:10:34.788535 | orchestrator | 2025-02-04 09:10:34.788606 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-02-04 09:10:34.789419 | orchestrator | Tuesday 04 February 2025 09:10:34 +0000 (0:00:12.762) 0:00:19.271 ****** 2025-02-04 09:10:34.865605 | orchestrator | skipping: 
[testbed-node-3] 2025-02-04 09:10:34.865948 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:10:34.867247 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:10:34.867630 | orchestrator | 2025-02-04 09:10:34.868924 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-02-04 09:10:34.869750 | orchestrator | Tuesday 04 February 2025 09:10:34 +0000 (0:00:00.082) 0:00:19.353 ****** 2025-02-04 09:10:41.615317 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:10:41.615562 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:10:41.615599 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:10:41.615700 | orchestrator | 2025-02-04 09:10:41.616319 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-02-04 09:10:41.616837 | orchestrator | Tuesday 04 February 2025 09:10:41 +0000 (0:00:06.748) 0:00:26.101 ****** 2025-02-04 09:10:42.109693 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:10:42.109850 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:10:42.111242 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:10:42.112081 | orchestrator | 2025-02-04 09:10:42.112992 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-02-04 09:10:42.114059 | orchestrator | Tuesday 04 February 2025 09:10:42 +0000 (0:00:00.495) 0:00:26.597 ****** 2025-02-04 09:10:45.221033 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-02-04 09:10:45.221275 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-02-04 09:10:45.221308 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-02-04 09:10:45.221332 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-02-04 09:10:45.222316 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-02-04 09:10:45.222758 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-02-04 09:10:45.223604 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-02-04 09:10:45.224216 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-02-04 09:10:45.224981 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-02-04 09:10:45.225278 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-02-04 09:10:45.226056 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-02-04 09:10:45.226313 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-02-04 09:10:45.226755 | orchestrator | 2025-02-04 09:10:45.227178 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-02-04 09:10:45.227844 | orchestrator | Tuesday 04 February 2025 09:10:45 +0000 (0:00:03.110) 0:00:29.707 ****** 2025-02-04 09:10:46.095008 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:10:46.098412 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:10:46.099206 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:10:46.100366 | orchestrator | 2025-02-04 09:10:46.101587 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-02-04 09:10:46.104401 | orchestrator | 2025-02-04 09:10:46.104491 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-02-04 09:10:50.247986 | orchestrator | Tuesday 
2025-02-04 09:10:46.101587 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-02-04 09:10:46.104401 | orchestrator |
2025-02-04 09:10:46.104491 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-02-04 09:10:50.247986 | orchestrator | Tuesday 04 February 2025 09:10:46 +0000 (0:00:00.872) 0:00:30.580 ******
2025-02-04 09:10:50.248176 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:10:50.248265 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:10:50.249641 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:10:50.250406 | orchestrator | ok: [testbed-manager]
2025-02-04 09:10:50.251075 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:10:50.252240 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:10:50.253189 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:10:50.254164 | orchestrator |
2025-02-04 09:10:50.255012 | orchestrator | PLAY RECAP *********************************************************************
2025-02-04 09:10:50.255736 | orchestrator | 2025-02-04 09:10:50 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-02-04 09:10:50.256550 | orchestrator | 2025-02-04 09:10:50 | INFO  | Please wait and do not abort execution.
2025-02-04 09:10:50.257700 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-02-04 09:10:50.259094 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-02-04 09:10:50.259938 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-02-04 09:10:50.260513 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-02-04 09:10:50.261121 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-02-04 09:10:50.261645 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-02-04 09:10:50.262139 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-02-04 09:10:50.263078 | orchestrator |
2025-02-04 09:10:50.263312 | orchestrator |
2025-02-04 09:10:50.263934 | orchestrator | TASKS RECAP ********************************************************************
2025-02-04 09:10:50.264387 | orchestrator | Tuesday 04 February 2025 09:10:50 +0000 (0:00:04.154) 0:00:34.734 ******
2025-02-04 09:10:50.264899 | orchestrator | ===============================================================================
2025-02-04 09:10:50.265310 | orchestrator | osism.commons.repository : Update package cache ------------------------ 12.76s
2025-02-04 09:10:50.265805 | orchestrator | Install required packages (Debian) -------------------------------------- 6.75s
2025-02-04 09:10:50.266413 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.15s
2025-02-04 09:10:50.266789 | orchestrator | Copy fact files --------------------------------------------------------- 3.11s
2025-02-04 09:10:50.267437 | orchestrator | Create custom facts directory ------------------------------------------- 1.48s
2025-02-04 09:10:50.268007 | orchestrator | Copy fact file ---------------------------------------------------------- 1.23s
2025-02-04 09:10:50.268635 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.11s
2025-02-04 09:10:50.269062 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.99s
2025-02-04 09:10:50.269590 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 0.87s
2025-02-04 09:10:50.269957 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.51s
2025-02-04 09:10:50.270760 | orchestrator | Create custom facts directory ------------------------------------------- 0.50s
2025-02-04 09:10:50.270888 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.45s
2025-02-04 09:10:50.271400 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.16s
2025-02-04 09:10:50.271894 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.14s
2025-02-04 09:10:50.272191 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.13s
2025-02-04 09:10:50.272898 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s
2025-02-04 09:10:50.273279 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s
2025-02-04 09:10:50.273623 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.08s
2025-02-04 09:10:50.694722 | orchestrator | + osism apply bootstrap
2025-02-04 09:10:52.227383 | orchestrator | 2025-02-04 09:10:52 | INFO  | Task bdeabbc5-86dc-4eb3-85ba-a9950de5d598 (bootstrap) was prepared for execution.
2025-02-04 09:10:55.595518 | orchestrator | 2025-02-04 09:10:52 | INFO  | It takes a moment until task bdeabbc5-86dc-4eb3-85ba-a9950de5d598 (bootstrap) has been started and output is visible here.
2025-02-04 09:10:55.595667 | orchestrator |
2025-02-04 09:10:55.597115 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-02-04 09:10:55.597151 | orchestrator |
2025-02-04 09:10:55.597731 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-02-04 09:10:55.598238 | orchestrator | Tuesday 04 February 2025 09:10:55 +0000 (0:00:00.113) 0:00:00.113 ******
2025-02-04 09:10:55.680510 | orchestrator | ok: [testbed-manager]
2025-02-04 09:10:55.711139 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:10:55.737018 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:10:55.763721 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:10:55.840948 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:10:55.842371 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:10:55.842435 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:10:55.842871 | orchestrator |
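Editor's note: "Group hosts based on state bootstrap" is plain Ansible group_by; later plays can then target the dynamic group instead of all hosts. A minimal sketch of that pattern, assuming a per-host bootstrap state variable (the variable and key names are illustrative, not taken from the OSISM playbook):

    - name: Group hosts based on state bootstrap
      hosts: all
      gather_facts: false
      tasks:
        - name: Group hosts based on state bootstrap
          ansible.builtin.group_by:
            # hypothetical variable; produces e.g. group "bootstrap_true"
            key: "bootstrap_{{ bootstrap_state | default('true') }}"
          changed_when: false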
2025-02-04 09:10:55.845123 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-02-04 09:10:55.845420 | orchestrator |
2025-02-04 09:10:55.846677 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-02-04 09:10:55.847875 | orchestrator | Tuesday 04 February 2025 09:10:55 +0000 (0:00:00.249) 0:00:00.363 ******
2025-02-04 09:10:59.465230 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:10:59.465623 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:10:59.466993 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:10:59.468382 | orchestrator | ok: [testbed-manager]
2025-02-04 09:10:59.469986 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:10:59.470058 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:10:59.471021 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:10:59.471769 | orchestrator |
2025-02-04 09:10:59.472371 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-02-04 09:10:59.473884 | orchestrator |
2025-02-04 09:10:59.474126 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-02-04 09:10:59.475198 | orchestrator | Tuesday 04 February 2025 09:10:59 +0000 (0:00:03.624) 0:00:03.988 ******
2025-02-04 09:10:59.561732 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-02-04 09:10:59.605394 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-02-04 09:10:59.605617 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-02-04 09:10:59.605657 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-02-04 09:10:59.605738 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-02-04 09:10:59.606088 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-02-04 09:10:59.606545 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-02-04 09:10:59.606936 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-02-04 09:10:59.607267 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-02-04 09:10:59.607630 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-02-04 09:10:59.653782 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-02-04 09:10:59.654099 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-02-04 09:10:59.654139 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-02-04 09:10:59.654590 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-02-04 09:10:59.654935 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-02-04 09:10:59.655238 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-02-04 09:10:59.655591 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-02-04 09:10:59.895637 | orchestrator | skipping: [testbed-manager]
2025-02-04 09:10:59.899371 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-02-04 09:10:59.900056 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-02-04 09:10:59.900089 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:10:59.900105 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-02-04 09:10:59.900119 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-02-04 09:10:59.900139 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-02-04 09:10:59.900570 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-02-04 09:10:59.901403 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-02-04 09:10:59.901905 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-02-04 09:10:59.902329 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-02-04 09:10:59.903012 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-02-04 09:10:59.903526 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-02-04 09:10:59.904022 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-02-04 09:10:59.904863 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-02-04 09:10:59.905428 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:10:59.906174 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-02-04 09:10:59.906516 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-02-04 09:10:59.906881 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-02-04 09:10:59.907363 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-02-04 09:10:59.908706 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-02-04 09:10:59.909200 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-02-04 09:10:59.910184 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-02-04 09:10:59.910798 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-02-04 09:10:59.911399 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-02-04 09:10:59.912241 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-02-04 09:10:59.912564 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-02-04 09:10:59.913147 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:10:59.914193 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-02-04 09:10:59.914511 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-02-04 09:10:59.915066 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-02-04 09:10:59.915588 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-02-04 09:10:59.915978 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-02-04 09:10:59.916386 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-02-04 09:10:59.917386 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:10:59.918255 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-02-04 09:10:59.921523 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:10:59.922162 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-02-04 09:10:59.922763 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:10:59.923121 | orchestrator |
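Editor's note: the item-by-item skips above are the usual shape of a delegated fact-gathering task that only fires when --limit leaves some hosts without cached facts; here every host already has facts, so every item skips. A sketch of that pattern (the when condition is simplified and illustrative):

    - name: Gather facts for all hosts (if using --limit)
      hosts: all
      gather_facts: false
      tasks:
        - name: Gathers facts about hosts
          ansible.builtin.setup:
          delegate_to: "{{ item }}"
          delegate_facts: true
          # Only needed when --limit hides some hosts; otherwise every
          # item skips, as in the log above.
          when: ansible_limit is defined and
                hostvars[item].ansible_facts is not defined
          loop: "{{ groups['all'] }}"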
2025-02-04 09:10:59.923686 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-02-04 09:10:59.924014 | orchestrator |
2025-02-04 09:10:59.925116 | orchestrator | TASK [osism.commons.hostname : Set hostname_name fact] *************************
2025-02-04 09:10:59.925930 | orchestrator | Tuesday 04 February 2025 09:10:59 +0000 (0:00:00.430) 0:00:04.418 ******
2025-02-04 09:10:59.986070 | orchestrator | ok: [testbed-manager]
2025-02-04 09:11:00.011358 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:11:00.047092 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:11:00.108337 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:11:00.111147 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:11:00.111309 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:11:00.111334 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:11:00.111354 | orchestrator |
2025-02-04 09:11:00.112226 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-02-04 09:11:00.112787 | orchestrator | Tuesday 04 February 2025 09:11:00 +0000 (0:00:00.211) 0:00:04.630 ******
2025-02-04 09:11:01.556996 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:11:01.557488 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:11:01.557601 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:11:01.557626 | orchestrator | ok: [testbed-manager]
2025-02-04 09:11:01.561830 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:11:01.562134 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:11:01.562423 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:11:01.563300 | orchestrator |
2025-02-04 09:11:01.565245 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-02-04 09:11:01.568920 | orchestrator | Tuesday 04 February 2025 09:11:01 +0000 (0:00:01.447) 0:00:06.077 ******
2025-02-04 09:11:02.849677 | orchestrator | ok: [testbed-manager]
2025-02-04 09:11:02.849870 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:11:02.853291 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:11:02.853803 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:11:02.853843 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:11:02.853860 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:11:02.853874 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:11:02.853894 | orchestrator |
2025-02-04 09:11:02.854467 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-02-04 09:11:02.855133 | orchestrator | Tuesday 04 February 2025 09:11:02 +0000 (0:00:01.294) 0:00:07.371 ******
2025-02-04 09:11:03.136904 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-02-04 09:11:03.139235 | orchestrator |
2025-02-04 09:11:03.140397 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-02-04 09:11:03.140561 | orchestrator | Tuesday 04 February 2025 09:11:03 +0000 (0:00:00.285) 0:00:07.657 ******
2025-02-04 09:11:05.320404 | orchestrator | changed: [testbed-manager]
2025-02-04 09:11:05.323276 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:11:05.323363 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:11:05.323394 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:11:05.323755 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:11:05.326112 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:11:05.326665 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:11:05.328003 | orchestrator |
2025-02-04 09:11:05.328933 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2025-02-04 09:11:05.329974 | orchestrator | Tuesday 04 February 2025 09:11:05 +0000 (0:00:02.183) 0:00:09.841 ******
2025-02-04 09:11:05.411580 | orchestrator | skipping: [testbed-manager]
2025-02-04 09:11:05.600753 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-02-04 09:11:05.601043 | orchestrator |
2025-02-04 09:11:05.602241 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2025-02-04 09:11:05.602936 | orchestrator | Tuesday 04 February 2025 09:11:05 +0000 (0:00:00.282) 0:00:10.123 ******
2025-02-04 09:11:06.672839 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:11:06.674116 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:11:06.674268 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:11:06.675558 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:11:06.676511 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:11:06.677883 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:11:06.678954 | orchestrator |
2025-02-04 09:11:06.681299 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2025-02-04 09:11:06.748610 | orchestrator | Tuesday 04 February 2025 09:11:06 +0000 (0:00:01.070) 0:00:11.194 ******
2025-02-04 09:11:06.748732 | orchestrator | skipping: [testbed-manager]
2025-02-04 09:11:07.321023 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:11:07.321198 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:11:07.322221 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:11:07.323158 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:11:07.324074 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:11:07.324584 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:11:07.325301 | orchestrator |
2025-02-04 09:11:07.325543 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2025-02-04 09:11:07.326326 | orchestrator | Tuesday 04 February 2025 09:11:07 +0000 (0:00:00.648) 0:00:11.843 ******
2025-02-04 09:11:07.440803 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:11:07.465646 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:11:07.488314 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:11:07.761342 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:11:07.761783 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:11:07.762668 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:11:07.762783 | orchestrator | ok: [testbed-manager]
2025-02-04 09:11:07.762804 | orchestrator |
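Editor's note: proxying apt and setting system-wide variables are two small file edits; a sketch of both, assuming an apt.conf.d drop-in and a managed block in /etc/environment (the file name and proxy variables are illustrative, not taken from the osism.commons.proxy role):

    - name: Configure proxy parameters for apt
      become: true
      ansible.builtin.copy:
        content: |
          Acquire::http::Proxy "{{ proxy_url }}";
          Acquire::https::Proxy "{{ proxy_url }}";
        dest: /etc/apt/apt.conf.d/01proxy          # hypothetical file name
        mode: "0644"

    - name: Set system wide settings in environment file
      become: true
      ansible.builtin.blockinfile:
        path: /etc/environment
        block: |
          http_proxy={{ proxy_url }}
          https_proxy={{ proxy_url }}
          no_proxy={{ no_proxy_list | join(',') }}  # hypothetical variable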
2025-02-04 09:11:07.762821 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-02-04 09:11:07.762851 | orchestrator | Tuesday 04 February 2025 09:11:07 +0000 (0:00:00.437) 0:00:12.280 ******
2025-02-04 09:11:07.835278 | orchestrator | skipping: [testbed-manager]
2025-02-04 09:11:07.863296 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:11:07.885701 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:11:07.913273 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:11:07.976110 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:11:07.976339 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:11:07.977513 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:11:07.977986 | orchestrator |
2025-02-04 09:11:07.978942 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-02-04 09:11:07.979111 | orchestrator | Tuesday 04 February 2025 09:11:07 +0000 (0:00:00.218) 0:00:12.499 ******
2025-02-04 09:11:08.290330 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-02-04 09:11:08.291667 | orchestrator |
2025-02-04 09:11:08.291769 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-02-04 09:11:08.293325 | orchestrator | Tuesday 04 February 2025 09:11:08 +0000 (0:00:00.313) 0:00:12.813 ******
2025-02-04 09:11:08.609132 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-02-04 09:11:08.611810 | orchestrator |
2025-02-04 09:11:10.025599 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-02-04 09:11:10.025835 | orchestrator | Tuesday 04 February 2025 09:11:08 +0000 (0:00:00.316) 0:00:13.130 ******
2025-02-04 09:11:10.025900 | orchestrator | ok: [testbed-manager]
2025-02-04 09:11:10.026080 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:11:10.027097 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:11:10.028566 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:11:10.029147 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:11:10.029777 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:11:10.030579 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:11:10.031199 | orchestrator |
2025-02-04 09:11:10.031663 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-02-04 09:11:10.032169 | orchestrator | Tuesday 04 February 2025 09:11:10 +0000 (0:00:01.416) 0:00:14.546 ******
2025-02-04 09:11:10.106138 | orchestrator | skipping: [testbed-manager]
2025-02-04 09:11:10.127742 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:11:10.157764 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:11:10.177018 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:11:10.233558 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:11:10.233698 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:11:10.237333 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:11:10.710618 | orchestrator |
2025-02-04 09:11:10.710810 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-02-04 09:11:10.710847 | orchestrator | Tuesday 04 February 2025 09:11:10 +0000 (0:00:00.209) 0:00:14.755 ******
2025-02-04 09:11:10.710893 | orchestrator | ok: [testbed-manager]
2025-02-04 09:11:10.711002 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:11:10.712617 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:11:10.713685 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:11:10.714855 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:11:10.715135 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:11:10.716523 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:11:10.717309 | orchestrator |
2025-02-04 09:11:10.717884 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-02-04 09:11:10.718670 | orchestrator | Tuesday 04 February 2025 09:11:10 +0000 (0:00:00.477) 0:00:15.233 ******
2025-02-04 09:11:10.799616 | orchestrator | skipping: [testbed-manager]
2025-02-04 09:11:10.822090 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:11:10.845106 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:11:10.866857 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:11:10.931636 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:11:10.932606 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:11:10.932675 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:11:10.933121 | orchestrator |
2025-02-04 09:11:10.933886 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-02-04 09:11:10.934572 | orchestrator | Tuesday 04 February 2025 09:11:10 +0000 (0:00:00.220) 0:00:15.454 ******
2025-02-04 09:11:11.414592 | orchestrator | ok: [testbed-manager]
2025-02-04 09:11:11.415001 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:11:11.416040 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:11:11.417665 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:11:11.418420 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:11:11.419622 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:11:11.420494 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:11:11.421081 | orchestrator |
2025-02-04 09:11:11.421905 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-02-04 09:11:11.422733 | orchestrator | Tuesday 04 February 2025 09:11:11 +0000 (0:00:00.482) 0:00:15.936 ******
2025-02-04 09:11:12.457817 | orchestrator | ok: [testbed-manager]
2025-02-04 09:11:12.458121 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:11:12.458153 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:11:12.458169 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:11:12.458905 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:11:12.459982 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:11:12.461076 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:11:12.461763 | orchestrator |
2025-02-04 09:11:12.463069 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-02-04 09:11:13.721592 | orchestrator | Tuesday 04 February 2025 09:11:12 +0000 (0:00:01.040) 0:00:16.977 ******
2025-02-04 09:11:13.721739 | orchestrator | ok: [testbed-manager]
2025-02-04 09:11:13.721830 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:11:13.721855 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:11:13.721953 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:11:13.723839 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:11:13.723946 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:11:13.724397 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:11:13.725804 | orchestrator |
2025-02-04 09:11:13.725941 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-02-04 09:11:13.726665 | orchestrator | Tuesday 04 February 2025 09:11:13 +0000 (0:00:01.266) 0:00:18.243 ******
2025-02-04 09:11:14.073370 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-02-04 09:11:14.074231 | orchestrator |
2025-02-04 09:11:14.074358 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-02-04 09:11:14.077887 | orchestrator | Tuesday 04 February 2025 09:11:14 +0000 (0:00:00.350) 0:00:18.594 ******
2025-02-04 09:11:14.171325 | orchestrator | skipping: [testbed-manager]
2025-02-04 09:11:15.440749 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:11:15.441966 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:11:15.441995 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:11:15.442543 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:11:15.442665 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:11:15.443029 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:11:15.443466 | orchestrator |
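Editor's note: the link task is idempotent, which is why the manager reports ok while the freshly provisioned nodes report changed. On hosts using systemd-resolved, /etc/resolv.conf points at the stub resolver file, and the task plausibly reduces to (a minimal sketch, not the role's actual code):

    - name: Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf
      become: true
      ansible.builtin.file:
        src: /run/systemd/resolve/stub-resolv.conf
        path: /etc/resolv.conf
        state: link
        force: true    # replace a plain file left over from provisioning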
2025-02-04 09:11:15.443651 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-02-04 09:11:15.444050 | orchestrator | Tuesday 04 February 2025 09:11:15 +0000 (0:00:01.364) 0:00:19.959 ******
2025-02-04 09:11:15.546893 | orchestrator | ok: [testbed-manager]
2025-02-04 09:11:15.584162 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:11:15.612289 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:11:15.636932 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:11:15.702187 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:11:15.703154 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:11:15.707228 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:11:15.707674 | orchestrator |
2025-02-04 09:11:15.708680 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-02-04 09:11:15.709423 | orchestrator | Tuesday 04 February 2025 09:11:15 +0000 (0:00:00.264) 0:00:20.224 ******
2025-02-04 09:11:15.811809 | orchestrator | ok: [testbed-manager]
2025-02-04 09:11:15.844384 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:11:15.861850 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:11:15.955570 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:11:15.956540 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:11:15.960297 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:11:16.037334 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:11:16.037476 | orchestrator |
2025-02-04 09:11:16.037497 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-02-04 09:11:16.037512 | orchestrator | Tuesday 04 February 2025 09:11:15 +0000 (0:00:00.250) 0:00:20.475 ******
2025-02-04 09:11:16.037541 | orchestrator | ok: [testbed-manager]
2025-02-04 09:11:16.064800 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:11:16.096652 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:11:16.124126 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:11:16.185775 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:11:16.186041 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:11:16.186731 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:11:16.188629 | orchestrator |
2025-02-04 09:11:16.189138 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-02-04 09:11:16.189163 | orchestrator | Tuesday 04 February 2025 09:11:16 +0000 (0:00:00.233) 0:00:20.708 ******
2025-02-04 09:11:16.545125 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-02-04 09:11:16.546512 | orchestrator |
2025-02-04 09:11:16.546559 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-02-04 09:11:16.547080 | orchestrator | Tuesday 04 February 2025 09:11:16 +0000 (0:00:00.355) 0:00:21.064 ******
2025-02-04 09:11:17.149439 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:11:17.150388 | orchestrator | ok: [testbed-manager]
2025-02-04 09:11:17.151151 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:11:17.152241 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:11:17.153640 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:11:17.154661 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:11:17.155010 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:11:17.155758 | orchestrator |
2025-02-04 09:11:17.156429 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-02-04 09:11:17.156775 | orchestrator | Tuesday 04 February 2025 09:11:17 +0000 (0:00:00.605) 0:00:21.670 ******
2025-02-04 09:11:17.230944 | orchestrator | skipping: [testbed-manager]
2025-02-04 09:11:17.256505 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:11:17.281091 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:11:17.320681 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:11:17.403247 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:11:17.405148 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:11:17.407957 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:11:17.408779 | orchestrator |
2025-02-04 09:11:17.409592 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-02-04 09:11:17.410785 | orchestrator | Tuesday 04 February 2025 09:11:17 +0000 (0:00:00.255) 0:00:21.925 ******
2025-02-04 09:11:18.471152 | orchestrator | ok: [testbed-manager]
2025-02-04 09:11:18.472260 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:11:18.473583 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:11:18.474755 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:11:18.475388 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:11:18.476773 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:11:18.477853 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:11:18.479006 | orchestrator |
2025-02-04 09:11:18.480036 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-02-04 09:11:18.480724 | orchestrator | Tuesday 04 February 2025 09:11:18 +0000 (0:00:01.066) 0:00:22.991 ******
2025-02-04 09:11:19.037327 | orchestrator | ok: [testbed-manager]
2025-02-04 09:11:19.037554 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:11:19.037611 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:11:19.038292 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:11:19.038921 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:11:19.039699 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:11:19.040814 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:11:19.041008 | orchestrator |
2025-02-04 09:11:19.041605 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-02-04 09:11:19.042585 | orchestrator | Tuesday 04 February 2025 09:11:19 +0000 (0:00:00.565) 0:00:23.557 ******
2025-02-04 09:11:20.172398 | orchestrator | ok: [testbed-manager]
2025-02-04 09:11:20.172776 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:11:20.173733 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:11:20.174990 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:11:20.175961 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:11:20.176229 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:11:20.176760 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:11:20.177472 | orchestrator |
2025-02-04 09:11:20.179025 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-02-04 09:11:20.179904 | orchestrator | Tuesday 04 February 2025 09:11:20 +0000 (0:00:01.136) 0:00:24.693 ******
2025-02-04 09:11:33.780064 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:11:33.780553 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:11:33.780594 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:11:33.780618 | orchestrator | changed: [testbed-manager]
2025-02-04 09:11:33.782619 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:11:33.782719 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:11:33.784038 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:11:33.784580 | orchestrator |
2025-02-04 09:11:33.785071 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2025-02-04 09:11:33.785485 | orchestrator | Tuesday 04 February 2025 09:11:33 +0000 (0:00:13.605) 0:00:38.298 ******
2025-02-04 09:11:33.872474 | orchestrator | ok: [testbed-manager]
2025-02-04 09:11:33.906666 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:11:33.939383 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:11:33.976485 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:11:34.054103 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:11:34.054595 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:11:34.055864 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:11:34.055960 | orchestrator |
2025-02-04 09:11:34.056630 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2025-02-04 09:11:34.057199 | orchestrator | Tuesday 04 February 2025 09:11:34 +0000 (0:00:00.278) 0:00:38.576 ******
2025-02-04 09:11:34.134757 | orchestrator | ok: [testbed-manager]
2025-02-04 09:11:34.169169 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:11:34.194868 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:11:34.225527 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:11:34.305268 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:11:34.305707 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:11:34.305920 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:11:34.306936 | orchestrator |
2025-02-04 09:11:34.307423 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2025-02-04 09:11:34.308019 | orchestrator | Tuesday 04 February 2025 09:11:34 +0000 (0:00:00.250) 0:00:38.827 ******
2025-02-04 09:11:34.392702 | orchestrator | ok: [testbed-manager]
2025-02-04 09:11:34.413318 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:11:34.444633 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:11:34.465834 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:11:34.529560 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:11:34.530611 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:11:34.531621 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:11:34.532538 | orchestrator |
2025-02-04 09:11:34.533244 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2025-02-04 09:11:34.534184 | orchestrator | Tuesday 04 February 2025 09:11:34 +0000 (0:00:00.224) 0:00:39.052 ******
2025-02-04 09:11:34.847318 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-02-04 09:11:34.848639 | orchestrator |
2025-02-04 09:11:34.849846 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2025-02-04 09:11:34.850901 | orchestrator | Tuesday 04 February 2025 09:11:34 +0000 (0:00:00.315) 0:00:39.368 ******
2025-02-04 09:11:36.443742 | orchestrator | ok: [testbed-manager]
2025-02-04 09:11:36.444372 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:11:36.444422 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:11:36.445621 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:11:36.446152 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:11:36.446184 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:11:36.448021 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:11:36.448960 | orchestrator |
2025-02-04 09:11:36.449606 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2025-02-04 09:11:36.450235 | orchestrator | Tuesday 04 February 2025 09:11:36 +0000 (0:00:01.594) 0:00:40.962 ******
2025-02-04 09:11:37.604531 | orchestrator | changed: [testbed-manager]
2025-02-04 09:11:37.605224 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:11:37.606255 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:11:37.607164 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:11:37.607570 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:11:37.608323 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:11:37.608918 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:11:37.609825 | orchestrator |
2025-02-04 09:11:37.610355 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2025-02-04 09:11:37.610706 | orchestrator | Tuesday 04 February 2025 09:11:37 +0000 (0:00:01.162) 0:00:42.124 ******
2025-02-04 09:11:38.412610 | orchestrator | ok: [testbed-manager]
2025-02-04 09:11:38.413760 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:11:38.415151 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:11:38.415500 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:11:38.416433 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:11:38.417389 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:11:38.418349 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:11:38.419197 | orchestrator |
2025-02-04 09:11:38.419983 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2025-02-04 09:11:38.420626 | orchestrator | Tuesday 04 February 2025 09:11:38 +0000 (0:00:00.808) 0:00:42.933 ******
2025-02-04 09:11:38.730383 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-02-04 09:11:38.730680 | orchestrator |
2025-02-04 09:11:38.730724 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2025-02-04 09:11:38.730760 | orchestrator | Tuesday 04 February 2025 09:11:38 +0000 (0:00:00.317) 0:00:43.251 ******
2025-02-04 09:11:39.858838 | orchestrator | changed: [testbed-manager]
2025-02-04 09:11:39.858995 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:11:39.860199 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:11:39.861143 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:11:39.862837 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:11:39.863948 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:11:39.865183 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:11:39.866540 | orchestrator |
2025-02-04 09:11:39.867274 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2025-02-04 09:11:39.868541 | orchestrator | Tuesday 04 February 2025 09:11:39 +0000 (0:00:01.127) 0:00:44.378 ******
2025-02-04 09:11:39.965739 | orchestrator | skipping: [testbed-manager]
2025-02-04 09:11:39.994549 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:11:40.025342 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:11:40.210782 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:11:40.211022 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:11:40.212820 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:11:40.213528 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:11:40.215206 | orchestrator |
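Editor's note: "Forward syslog message to local fluentd daemon" points at the standard rsyslog omfwd forwarding mechanism. A sketch of a task that drops such a rule (the target port, file name, and handler are assumptions, not taken from the log or the osism.services.rsyslog role):

    - name: Forward syslog message to local fluentd daemon
      become: true
      ansible.builtin.copy:
        content: |
          # Forward everything to the local fluentd input (port is illustrative)
          *.* action(type="omfwd" target="127.0.0.1" port="5140" protocol="udp")
        dest: /etc/rsyslog.d/10-fluentd.conf       # hypothetical file name
        mode: "0644"
      notify: Restart rsyslog service              # hypothetical handler name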
2025-02-04 09:11:40.215391 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2025-02-04 09:11:40.216520 | orchestrator | Tuesday 04 February 2025 09:11:40 +0000 (0:00:00.355) 0:00:44.733 ******
2025-02-04 09:11:52.956133 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:11:52.956309 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:11:52.956330 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:11:52.956350 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:11:52.957831 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:11:52.958310 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:11:52.958548 | orchestrator | changed: [testbed-manager]
2025-02-04 09:11:52.959265 | orchestrator |
2025-02-04 09:11:52.960874 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2025-02-04 09:11:52.961832 | orchestrator | Tuesday 04 February 2025 09:11:52 +0000 (0:00:12.741) 0:00:57.474 ******
2025-02-04 09:11:53.687617 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:11:53.690708 | orchestrator | ok: [testbed-manager]
2025-02-04 09:11:53.690768 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:11:53.690806 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:11:53.693599 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:11:53.693646 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:11:53.694533 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:11:53.695432 | orchestrator |
2025-02-04 09:11:53.695976 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2025-02-04 09:11:53.696587 | orchestrator | Tuesday 04 February 2025 09:11:53 +0000 (0:00:00.734) 0:00:58.208 ******
2025-02-04 09:11:54.776072 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:11:54.779346 | orchestrator | ok: [testbed-manager]
2025-02-04 09:11:54.779565 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:11:54.779607 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:11:54.779633 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:11:54.779657 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:11:54.779686 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:11:54.780011 | orchestrator |
2025-02-04 09:11:54.780439 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2025-02-04 09:11:54.780765 | orchestrator | Tuesday 04 February 2025 09:11:54 +0000 (0:00:01.088) 0:00:59.297 ******
2025-02-04 09:11:54.835685 | orchestrator | [WARNING]: Found variable using reserved name: q
2025-02-04 09:11:54.858344 | orchestrator | ok: [testbed-manager]
2025-02-04 09:11:54.895638 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:11:54.922825 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:11:54.948759 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:11:55.034116 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:11:55.034646 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:11:55.037150 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:11:55.037407 | orchestrator |
2025-02-04 09:11:55.037439 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-02-04 09:11:55.037694 | orchestrator | Tuesday 04 February 2025 09:11:55 +0000 (0:00:00.257) 0:00:59.555 ******
2025-02-04 09:11:55.115196 | orchestrator | ok: [testbed-manager]
2025-02-04 09:11:55.145780 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:11:55.170400 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:11:55.201388 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:11:55.277069 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:11:55.277369 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:11:55.277518 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:11:55.277930 | orchestrator |
2025-02-04 09:11:55.278418 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2025-02-04 09:11:55.282773 | orchestrator | Tuesday 04 February 2025 09:11:55 +0000 (0:00:00.243) 0:00:59.798 ******
2025-02-04 09:11:55.590400 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-02-04 09:11:55.590621 | orchestrator |
2025-02-04 09:11:55.591049 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2025-02-04 09:11:55.591545 | orchestrator | Tuesday 04 February 2025 09:11:55 +0000 (0:00:00.314) 0:01:00.112 ******
2025-02-04 09:11:57.356540 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:11:57.357208 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:11:57.357330 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:11:57.358070 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:11:57.358106 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:11:57.359878 | orchestrator | ok: [testbed-manager]
2025-02-04 09:11:57.360826 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:11:57.362096 | orchestrator |
2025-02-04 09:11:57.362691 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2025-02-04 09:11:57.363313 | orchestrator | Tuesday 04 February 2025 09:11:57 +0000 (0:00:01.762) 0:01:01.875 ******
2025-02-04 09:11:57.974332 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:11:57.974639 | orchestrator | changed: [testbed-manager]
2025-02-04 09:11:57.976112 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:11:57.976540 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:11:57.977529 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:11:57.978146 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:11:57.978537 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:11:57.979180 | orchestrator |
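Editor's note: "Set needrestart mode" usually means forcing needrestart into non-interactive mode so unattended apt runs are not blocked by its service-restart prompt. A sketch of one way to do that, assuming a conf.d drop-in (the file name and the chosen mode are assumptions):

    - name: Set needrestart mode
      become: true
      ansible.builtin.copy:
        # 'a' = restart services automatically instead of prompting
        content: "$nrconf{restart} = 'a';\n"
        dest: /etc/needrestart/conf.d/50-autorestart.conf   # hypothetical file name
        mode: "0644"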
2025-02-04 09:11:57.980022 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2025-02-04 09:11:57.980511 | orchestrator | Tuesday 04 February 2025 09:11:57 +0000 (0:00:00.620) 0:01:02.496 ******
2025-02-04 09:11:58.056157 | orchestrator | ok: [testbed-manager]
2025-02-04 09:11:58.088534 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:11:58.117603 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:11:58.144793 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:11:58.229990 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:11:58.231856 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:11:58.234081 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:11:59.363853 | orchestrator |
2025-02-04 09:11:59.363977 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2025-02-04 09:11:59.364052 | orchestrator | Tuesday 04 February 2025 09:11:58 +0000 (0:00:00.255) 0:01:02.751 ******
2025-02-04 09:11:59.364104 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:11:59.364213 | orchestrator | ok: [testbed-manager]
2025-02-04 09:11:59.364714 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:11:59.365774 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:11:59.366727 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:11:59.367402 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:11:59.368480 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:11:59.369739 | orchestrator |
2025-02-04 09:11:59.370298 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2025-02-04 09:11:59.370705 | orchestrator | Tuesday 04 February 2025 09:11:59 +0000 (0:00:01.128) 0:01:03.880 ******
2025-02-04 09:12:01.122605 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:12:01.123612 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:12:01.123699 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:12:01.124860 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:12:01.125690 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:12:01.126098 | orchestrator | ok: [testbed-manager]
2025-02-04 09:12:01.126488 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:12:01.127011 | orchestrator |
2025-02-04 09:12:01.127727 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2025-02-04 09:12:01.128152 | orchestrator | Tuesday 04 February 2025 09:12:01 +0000 (0:00:01.762) 0:01:05.643 ******
2025-02-04 09:12:06.788267 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:12:06.788498 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:12:06.788814 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:12:06.789687 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:12:06.789947 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:12:06.790484 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:12:06.790858 | orchestrator | changed: [testbed-manager]
2025-02-04 09:12:06.791365 | orchestrator |
2025-02-04 09:12:06.792065 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2025-02-04 09:12:06.793294 | orchestrator | Tuesday 04 February 2025 09:12:06 +0000 (0:00:05.663) 0:01:11.307 ******
2025-02-04 09:12:41.202417 | orchestrator | ok: [testbed-manager]
2025-02-04 09:12:41.202782 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:12:41.202816 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:12:41.202829 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:12:41.202842 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:12:41.202861 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:12:41.203872 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:12:41.204726 | orchestrator |
2025-02-04 09:12:41.205274 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2025-02-04 09:12:41.205784 | orchestrator | Tuesday 04 February 2025 09:12:41 +0000 (0:00:34.413) 0:01:45.721 ******
2025-02-04 09:13:55.362835 | orchestrator | changed: [testbed-manager]
2025-02-04 09:13:55.363362 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:13:55.363437 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:13:55.363477 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:13:55.363485 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:13:55.363490 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:13:55.363503 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:13:55.364411 | orchestrator |
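Editor's note: splitting "Download ... packages" from the actual upgrade/install keeps the long network phase distinct from the short dpkg phase, which is why the ~34 s download and ~74 s install durations above are reported separately. One way to express the split, using apt-get's --download-only flag (a sketch; the real role may implement this differently):

    - name: Download upgrade packages        # sketch; options simplified
      become: true
      ansible.builtin.command: apt-get --download-only --yes dist-upgrade
      register: apt_download
      changed_when: "'Fetched' in apt_download.stdout"  # illustrative heuristic

    - name: Upgrade packages
      become: true
      ansible.builtin.apt:
        upgrade: dist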
2025-02-04 09:13:55.364910 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2025-02-04 09:13:55.365334 | orchestrator | Tuesday 04 February 2025 09:13:55 +0000 (0:01:14.160) 0:02:59.881 ******
2025-02-04 09:13:57.316959 | orchestrator | changed: [testbed-manager]
2025-02-04 09:13:57.317169 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:13:57.317958 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:13:57.318949 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:13:57.319437 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:13:57.319929 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:13:57.321386 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:13:57.322311 | orchestrator |
2025-02-04 09:13:57.322732 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2025-02-04 09:13:57.323029 | orchestrator | Tuesday 04 February 2025 09:13:57 +0000 (0:00:01.955) 0:03:01.837 ******
2025-02-04 09:14:03.700700 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:14:03.700892 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:14:03.703504 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:14:03.704517 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:14:03.704557 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:14:03.704580 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:14:03.705648 | orchestrator | changed: [testbed-manager]
2025-02-04 09:14:03.705874 | orchestrator |
2025-02-04 09:14:03.707611 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2025-02-04 09:14:03.707942 | orchestrator | Tuesday 04 February 2025 09:14:03 +0000 (0:00:06.382) 0:03:08.219 ******
2025-02-04 09:14:04.150807 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2025-02-04 09:14:04.151603 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2025-02-04 09:14:04.151785 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2025-02-04 09:14:04.152300 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2025-02-04 09:14:04.153194 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2025-02-04 09:14:04.153683 | orchestrator |
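Editor's note: each included sysctl.yml run receives one key/value group from the loop above; inside, the standard approach is one ansible.posix.sysctl call per entry, guarded by a condition that explains the per-group skips on hosts outside the matching role (manager and storage-only nodes skip the elasticsearch and rabbitmq groups below). A sketch of that inner file (the guard variable and file layout are illustrative):

    # sysctl.yml - applied once per group from the loop above (sketch)
    - name: "Set sysctl parameters on {{ item.key }}"
      become: true
      ansible.posix.sysctl:
        name: "{{ parameter.name }}"
        value: "{{ parameter.value }}"
        sysctl_file: "/etc/sysctl.d/99-{{ item.key }}.conf"  # hypothetical layout
        state: present
        reload: true
      loop: "{{ item.value }}"
      loop_control:
        loop_var: parameter
      # Illustrative guard; the real role decides per host group.
      when: item.key in sysctl_enabled_groups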
2025-02-04 09:14:04.155040 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2025-02-04 09:14:04.156079 | orchestrator | Tuesday 04 February 2025 09:14:04 +0000 (0:00:00.454) 0:03:08.673 ******
2025-02-04 09:14:04.211434 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-02-04 09:14:04.245423 | orchestrator | skipping: [testbed-manager]
2025-02-04 09:14:04.332160 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-02-04 09:14:04.332315 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-02-04 09:14:04.876782 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:14:04.877486 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:14:04.878276 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-02-04 09:14:04.878323 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:14:04.878745 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-02-04 09:14:04.879363 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-02-04 09:14:04.879824 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-02-04 09:14:04.880106 | orchestrator |
2025-02-04 09:14:04.881164 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2025-02-04 09:14:04.979179 | orchestrator | Tuesday 04 February 2025 09:14:04 +0000 (0:00:00.723) 0:03:09.397 ******
2025-02-04 09:14:04.979339 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-02-04 09:14:04.979418 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-02-04 09:14:04.979437 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-02-04 09:14:04.979519 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-02-04 09:14:04.979547 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-02-04 09:14:04.979572 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-02-04 09:14:04.979595 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-02-04 09:14:04.979617 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-02-04 09:14:04.979911 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-02-04 09:14:05.008842 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-02-04 09:14:05.008940 | orchestrator | skipping: [testbed-manager]
2025-02-04 09:14:05.119265 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-02-04 09:14:05.119439 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-02-04 09:14:05.119524 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-02-04 09:14:05.119545 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-02-04 09:14:05.120010 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-02-04 09:14:05.120586 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-02-04 09:14:05.120939 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-02-04 09:14:05.121125 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-02-04 09:14:05.121575 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-02-04 09:14:05.121672 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-02-04 09:14:05.122419 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-02-04 09:14:05.122979 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-02-04 09:14:05.123661 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-02-04 09:14:05.123763 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-02-04 09:14:05.124622 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-02-04 09:14:05.125349 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-02-04 09:14:10.842730 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:14:10.842971 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-02-04 09:14:10.843039 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-02-04 09:14:10.843102 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-02-04 09:14:10.843836 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-02-04 09:14:10.844514 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:14:10.844555 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-02-04 09:14:10.845052 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-02-04 09:14:10.845534 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-02-04 09:14:10.845589 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-02-04 09:14:10.845989 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-02-04 09:14:10.846496 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-02-04 09:14:10.846813 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-02-04 09:14:10.847186 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-02-04 09:14:10.848138 | orchestrator | skipping:
[testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-02-04 09:14:10.848584 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-02-04 09:14:10.848646 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:14:10.848735 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-02-04 09:14:10.849129 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-02-04 09:14:10.849946 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-02-04 09:14:10.850613 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-02-04 09:14:10.851153 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-02-04 09:14:10.851949 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-02-04 09:14:10.852360 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-02-04 09:14:10.852407 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-02-04 09:14:10.852560 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-02-04 09:14:10.852767 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-02-04 09:14:10.853595 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-02-04 09:14:10.853922 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-02-04 09:14:10.854302 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-02-04 09:14:10.854767 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-02-04 09:14:10.855098 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-02-04 09:14:10.855387 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-02-04 09:14:10.856034 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-02-04 09:14:10.856840 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-02-04 09:14:10.856927 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-02-04 09:14:10.857336 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-02-04 09:14:10.858298 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-02-04 09:14:10.859873 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-02-04 09:14:10.861014 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-02-04 09:14:10.861388 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-02-04 09:14:10.861883 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-02-04 
09:14:10.862180 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-02-04 09:14:10.862941 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-02-04 09:14:10.864351 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-02-04 09:14:10.864662 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-02-04 09:14:10.865399 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-02-04 09:14:10.865503 | orchestrator | 2025-02-04 09:14:10.865631 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-02-04 09:14:10.866354 | orchestrator | Tuesday 04 February 2025 09:14:10 +0000 (0:00:05.966) 0:03:15.363 ****** 2025-02-04 09:14:11.452272 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-02-04 09:14:11.453154 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-02-04 09:14:11.453801 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-02-04 09:14:11.455075 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-02-04 09:14:11.455539 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-02-04 09:14:11.455980 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-02-04 09:14:11.457771 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-02-04 09:14:11.458560 | orchestrator | 2025-02-04 09:14:11.458599 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-02-04 09:14:11.459514 | orchestrator | Tuesday 04 February 2025 09:14:11 +0000 (0:00:00.608) 0:03:15.972 ****** 2025-02-04 09:14:11.530828 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-02-04 09:14:11.531141 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-02-04 09:14:11.564372 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:14:11.564612 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-02-04 09:14:11.600035 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:14:11.600390 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-02-04 09:14:11.626299 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:14:11.665251 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:14:14.064689 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-02-04 09:14:14.064950 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-02-04 09:14:14.064988 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-02-04 09:14:14.065003 | orchestrator | 2025-02-04 09:14:14.065026 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-02-04 09:14:14.127441 | orchestrator | Tuesday 04 
February 2025 09:14:14 +0000 (0:00:02.613) 0:03:18.585 ****** 2025-02-04 09:14:14.127618 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-02-04 09:14:14.153711 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-02-04 09:14:14.153803 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:14:14.190349 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:14:14.190919 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-02-04 09:14:14.216855 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:14:14.216980 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-02-04 09:14:14.246724 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:14:14.786879 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-02-04 09:14:14.787263 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-02-04 09:14:14.788614 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-02-04 09:14:14.789839 | orchestrator | 2025-02-04 09:14:14.791101 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-02-04 09:14:14.791824 | orchestrator | Tuesday 04 February 2025 09:14:14 +0000 (0:00:00.723) 0:03:19.308 ****** 2025-02-04 09:14:14.886094 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:14:14.909788 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:14:14.939507 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:14:14.962497 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:14:15.096215 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:14:15.096412 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:14:15.096443 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:14:15.097190 | orchestrator | 2025-02-04 09:14:15.097372 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-02-04 09:14:15.097751 | orchestrator | Tuesday 04 February 2025 09:14:15 +0000 (0:00:00.308) 0:03:19.617 ****** 2025-02-04 09:14:21.088723 | orchestrator | ok: [testbed-manager] 2025-02-04 09:14:21.122833 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:14:21.122958 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:14:21.122977 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:14:21.122991 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:14:21.123005 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:14:21.123020 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:14:21.123066 | orchestrator | 2025-02-04 09:14:21.123084 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-02-04 09:14:21.123101 | orchestrator | Tuesday 04 February 2025 09:14:21 +0000 (0:00:05.989) 0:03:25.606 ****** 2025-02-04 09:14:21.123134 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-02-04 09:14:21.170011 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:14:21.208031 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-02-04 09:14:21.208261 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:14:21.208343 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-02-04 
09:14:21.243583 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:14:21.286802 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-02-04 09:14:21.287061 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-02-04 09:14:21.340273 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:14:21.342125 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-02-04 09:14:21.424395 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:14:21.426321 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:14:21.427982 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-02-04 09:14:21.429243 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:14:21.431144 | orchestrator | 2025-02-04 09:14:21.431895 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-02-04 09:14:21.433023 | orchestrator | Tuesday 04 February 2025 09:14:21 +0000 (0:00:00.336) 0:03:25.943 ****** 2025-02-04 09:14:22.736650 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-02-04 09:14:22.736777 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-02-04 09:14:22.737200 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-02-04 09:14:22.737703 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-02-04 09:14:22.737929 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-02-04 09:14:22.738883 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-02-04 09:14:22.739026 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-02-04 09:14:22.739512 | orchestrator | 2025-02-04 09:14:22.739772 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-02-04 09:14:22.739959 | orchestrator | Tuesday 04 February 2025 09:14:22 +0000 (0:00:01.315) 0:03:27.259 ****** 2025-02-04 09:14:23.199522 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:14:23.199750 | orchestrator | 2025-02-04 09:14:23.200789 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-02-04 09:14:23.201398 | orchestrator | Tuesday 04 February 2025 09:14:23 +0000 (0:00:00.463) 0:03:27.722 ****** 2025-02-04 09:14:24.629695 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:14:24.629873 | orchestrator | ok: [testbed-manager] 2025-02-04 09:14:24.630438 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:14:24.631268 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:14:24.632914 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:14:24.633360 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:14:24.633960 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:14:24.634656 | orchestrator | 2025-02-04 09:14:24.635127 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-02-04 09:14:24.635637 | orchestrator | Tuesday 04 February 2025 09:14:24 +0000 (0:00:01.426) 0:03:29.149 ****** 2025-02-04 09:14:25.271031 | orchestrator | ok: [testbed-manager] 2025-02-04 09:14:25.271215 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:14:25.271759 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:14:25.273217 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:14:25.274671 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:14:25.275576 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:14:25.275608 | 
orchestrator | ok: [testbed-node-5] 2025-02-04 09:14:25.276580 | orchestrator | 2025-02-04 09:14:25.277063 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-02-04 09:14:25.277591 | orchestrator | Tuesday 04 February 2025 09:14:25 +0000 (0:00:00.643) 0:03:29.793 ****** 2025-02-04 09:14:25.947933 | orchestrator | changed: [testbed-manager] 2025-02-04 09:14:25.949479 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:14:25.952199 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:14:25.952241 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:14:25.952704 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:14:25.952901 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:14:25.953760 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:14:25.954095 | orchestrator | 2025-02-04 09:14:25.954130 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-02-04 09:14:25.954572 | orchestrator | Tuesday 04 February 2025 09:14:25 +0000 (0:00:00.673) 0:03:30.466 ****** 2025-02-04 09:14:26.567142 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:14:26.567307 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:14:26.567650 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:14:26.567982 | orchestrator | ok: [testbed-manager] 2025-02-04 09:14:26.568412 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:14:26.569138 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:14:26.569216 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:14:26.571044 | orchestrator | 2025-02-04 09:14:26.571371 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-02-04 09:14:26.572167 | orchestrator | Tuesday 04 February 2025 09:14:26 +0000 (0:00:00.624) 0:03:31.090 ****** 2025-02-04 09:14:27.588600 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1738658719.9593668, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-04 09:14:27.588832 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1738660197.9849498, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-04 09:14:27.590171 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1738660198.0537634, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-04 09:14:27.590900 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1738660198.0724647, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-04 09:14:27.591376 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1738660198.0486627, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-04 09:14:27.592138 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1738660198.0200295, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-04 09:14:27.593102 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1738660198.0721183, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-04 09:14:27.594083 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1738658743.8828406, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-04 09:14:27.595110 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1738658654.0590525, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-04 09:14:27.595745 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1738658646.8256443, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-04 09:14:27.596425 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1738658649.7924871, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-04 09:14:27.596960 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1738658658.9079578, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-04 09:14:27.598108 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1738658660.3987334, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-04 09:14:27.598268 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1738658656.3601868, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-02-04 09:14:27.599257 | orchestrator | 2025-02-04 09:14:27.599920 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-02-04 09:14:27.600189 | orchestrator | Tuesday 04 February 2025 09:14:27 +0000 (0:00:01.018) 0:03:32.109 ****** 2025-02-04 09:14:28.800705 | orchestrator | changed: [testbed-manager] 2025-02-04 09:14:28.800911 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:14:28.802854 | orchestrator | changed: [testbed-node-1] 2025-02-04 
09:14:28.803720 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:14:28.805863 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:14:28.807180 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:14:28.808104 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:14:28.808887 | orchestrator | 2025-02-04 09:14:28.810077 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-02-04 09:14:28.811045 | orchestrator | Tuesday 04 February 2025 09:14:28 +0000 (0:00:01.210) 0:03:33.320 ****** 2025-02-04 09:14:29.952520 | orchestrator | changed: [testbed-manager] 2025-02-04 09:14:29.953822 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:14:29.954283 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:14:29.955989 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:14:29.956696 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:14:29.957714 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:14:29.958550 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:14:29.959356 | orchestrator | 2025-02-04 09:14:29.960182 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-02-04 09:14:29.961164 | orchestrator | Tuesday 04 February 2025 09:14:29 +0000 (0:00:01.153) 0:03:34.474 ****** 2025-02-04 09:14:31.129370 | orchestrator | changed: [testbed-manager] 2025-02-04 09:14:31.129793 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:14:31.132166 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:14:31.132800 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:14:31.133308 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:14:31.134101 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:14:31.134788 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:14:31.135574 | orchestrator | 2025-02-04 09:14:31.136400 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-02-04 09:14:31.137274 | orchestrator | Tuesday 04 February 2025 09:14:31 +0000 (0:00:01.174) 0:03:35.648 ****** 2025-02-04 09:14:31.197489 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:14:31.230277 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:14:31.261908 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:14:31.293957 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:14:31.325093 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:14:31.383269 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:14:31.385114 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:14:31.386272 | orchestrator | 2025-02-04 09:14:31.387180 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-02-04 09:14:31.388602 | orchestrator | Tuesday 04 February 2025 09:14:31 +0000 (0:00:00.257) 0:03:35.906 ****** 2025-02-04 09:14:32.212824 | orchestrator | ok: [testbed-manager] 2025-02-04 09:14:32.213758 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:14:32.214805 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:14:32.215691 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:14:32.217210 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:14:32.217956 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:14:32.218984 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:14:32.220173 | orchestrator | 2025-02-04 09:14:32.221024 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-02-04 09:14:32.221717 | 
orchestrator | Tuesday 04 February 2025 09:14:32 +0000 (0:00:00.826) 0:03:36.732 ****** 2025-02-04 09:14:32.670880 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:14:32.672969 | orchestrator | 2025-02-04 09:14:32.675728 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-02-04 09:14:40.863089 | orchestrator | Tuesday 04 February 2025 09:14:32 +0000 (0:00:00.460) 0:03:37.193 ****** 2025-02-04 09:14:40.863239 | orchestrator | ok: [testbed-manager] 2025-02-04 09:14:40.863869 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:14:40.864079 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:14:40.865567 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:14:40.866826 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:14:40.868281 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:14:40.869176 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:14:40.870436 | orchestrator | 2025-02-04 09:14:40.871619 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-02-04 09:14:40.872302 | orchestrator | Tuesday 04 February 2025 09:14:40 +0000 (0:00:08.189) 0:03:45.383 ****** 2025-02-04 09:14:42.352828 | orchestrator | ok: [testbed-manager] 2025-02-04 09:14:42.353016 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:14:42.355053 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:14:42.355264 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:14:42.355698 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:14:42.356482 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:14:42.356596 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:14:42.357046 | orchestrator | 2025-02-04 09:14:42.357586 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-02-04 09:14:42.358061 | orchestrator | Tuesday 04 February 2025 09:14:42 +0000 (0:00:01.489) 0:03:46.872 ****** 2025-02-04 09:14:43.482081 | orchestrator | ok: [testbed-manager] 2025-02-04 09:14:43.482304 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:14:43.485258 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:14:43.486416 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:14:43.486851 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:14:43.487406 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:14:43.488528 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:14:43.489045 | orchestrator | 2025-02-04 09:14:43.489567 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-02-04 09:14:43.490214 | orchestrator | Tuesday 04 February 2025 09:14:43 +0000 (0:00:01.130) 0:03:48.003 ****** 2025-02-04 09:14:43.951680 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:14:43.952484 | orchestrator | 2025-02-04 09:14:43.953805 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-02-04 09:14:43.954680 | orchestrator | Tuesday 04 February 2025 09:14:43 +0000 (0:00:00.469) 0:03:48.472 ****** 2025-02-04 09:14:52.867276 | orchestrator | changed: [testbed-node-5] 
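Note: the smartd tasks running here follow the same install/configure/enable pattern as the rng role just above. A minimal sketch of that pattern, assuming standard Ansible modules — the actual osism.services.smartd task internals, template names, and paths are not shown in this log and are assumptions:

# Hedged sketch; mirrors the task names logged in this section, not the role source.
- name: Install smartmontools package
  ansible.builtin.apt:
    name: smartmontools
    state: present

- name: Create /var/log/smartd directory
  ansible.builtin.file:
    path: /var/log/smartd
    state: directory
    mode: "0755"

- name: Copy smartmontools configuration file   # template name is an assumption
  ansible.builtin.template:
    src: smartd.conf.j2
    dest: /etc/smartd.conf

- name: Manage smartd service
  ansible.builtin.service:
    name: smartd
    state: started
    enabled: true
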
2025-02-04 09:14:52.868427 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:14:52.870931 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:14:52.871030 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:14:52.871476 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:14:52.871510 | orchestrator | changed: [testbed-manager] 2025-02-04 09:14:52.871731 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:14:52.872550 | orchestrator | 2025-02-04 09:14:52.873158 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-02-04 09:14:52.874295 | orchestrator | Tuesday 04 February 2025 09:14:52 +0000 (0:00:08.911) 0:03:57.384 ****** 2025-02-04 09:14:53.535187 | orchestrator | changed: [testbed-manager] 2025-02-04 09:14:53.537051 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:14:53.539844 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:14:53.540901 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:14:53.540936 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:14:53.540951 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:14:53.540971 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:14:53.541860 | orchestrator | 2025-02-04 09:14:53.542778 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-02-04 09:14:53.543071 | orchestrator | Tuesday 04 February 2025 09:14:53 +0000 (0:00:00.673) 0:03:58.057 ****** 2025-02-04 09:14:54.662768 | orchestrator | changed: [testbed-manager] 2025-02-04 09:14:54.663268 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:14:54.665389 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:14:54.667295 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:14:54.667334 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:14:54.667613 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:14:54.667643 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:14:54.668574 | orchestrator | 2025-02-04 09:14:54.669224 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-02-04 09:14:54.670059 | orchestrator | Tuesday 04 February 2025 09:14:54 +0000 (0:00:01.124) 0:03:59.182 ****** 2025-02-04 09:14:55.751199 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:14:55.753393 | orchestrator | changed: [testbed-manager] 2025-02-04 09:14:55.753863 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:14:55.753900 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:14:55.755217 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:14:55.756050 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:14:55.756920 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:14:55.757419 | orchestrator | 2025-02-04 09:14:55.758413 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-02-04 09:14:55.759264 | orchestrator | Tuesday 04 February 2025 09:14:55 +0000 (0:00:01.087) 0:04:00.270 ****** 2025-02-04 09:14:55.883045 | orchestrator | ok: [testbed-manager] 2025-02-04 09:14:55.921667 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:14:55.970977 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:14:56.004215 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:14:56.091554 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:14:56.095488 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:14:56.095553 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:14:56.095850 | orchestrator | 2025-02-04 
09:14:56.098790 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-02-04 09:14:56.102413 | orchestrator | Tuesday 04 February 2025 09:14:56 +0000 (0:00:00.341) 0:04:00.611 ****** 2025-02-04 09:14:56.223810 | orchestrator | ok: [testbed-manager] 2025-02-04 09:14:56.259664 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:14:56.298006 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:14:56.336520 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:14:56.429189 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:14:56.429551 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:14:56.429583 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:14:56.430081 | orchestrator | 2025-02-04 09:14:56.431026 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-02-04 09:14:56.433002 | orchestrator | Tuesday 04 February 2025 09:14:56 +0000 (0:00:00.339) 0:04:00.951 ****** 2025-02-04 09:14:56.539588 | orchestrator | ok: [testbed-manager] 2025-02-04 09:14:56.577265 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:14:56.613365 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:14:56.650133 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:14:56.740623 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:14:56.741065 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:14:56.741780 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:14:56.745828 | orchestrator | 2025-02-04 09:14:56.745950 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-02-04 09:14:56.746614 | orchestrator | Tuesday 04 February 2025 09:14:56 +0000 (0:00:00.312) 0:04:01.263 ****** 2025-02-04 09:15:02.387618 | orchestrator | ok: [testbed-manager] 2025-02-04 09:15:02.388634 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:15:02.388715 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:15:02.389720 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:15:02.390211 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:15:02.391795 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:15:02.392259 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:15:02.392856 | orchestrator | 2025-02-04 09:15:02.393659 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-02-04 09:15:02.394434 | orchestrator | Tuesday 04 February 2025 09:15:02 +0000 (0:00:05.644) 0:04:06.908 ****** 2025-02-04 09:15:02.808389 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:15:02.812099 | orchestrator | 2025-02-04 09:15:02.812242 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-02-04 09:15:02.813082 | orchestrator | Tuesday 04 February 2025 09:15:02 +0000 (0:00:00.419) 0:04:07.328 ****** 2025-02-04 09:15:02.849184 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-02-04 09:15:02.918224 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-02-04 09:15:02.973032 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-02-04 09:15:02.973154 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-02-04 09:15:02.973199 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:15:02.974279 | orchestrator | 
skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-02-04 09:15:02.974367 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-02-04 09:15:03.013160 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:15:03.058069 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:15:03.058754 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-02-04 09:15:03.059853 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-02-04 09:15:03.104975 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-02-04 09:15:03.105106 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:15:03.187993 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-02-04 09:15:03.188181 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-02-04 09:15:03.191994 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:15:03.193695 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-02-04 09:15:03.194924 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:15:03.195700 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-02-04 09:15:03.197911 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-02-04 09:15:03.198677 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:15:03.199182 | orchestrator | 2025-02-04 09:15:03.199546 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-02-04 09:15:03.200071 | orchestrator | Tuesday 04 February 2025 09:15:03 +0000 (0:00:00.381) 0:04:07.710 ****** 2025-02-04 09:15:03.770321 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:15:03.771820 | orchestrator | 2025-02-04 09:15:03.771947 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-02-04 09:15:03.775214 | orchestrator | Tuesday 04 February 2025 09:15:03 +0000 (0:00:00.582) 0:04:08.292 ****** 2025-02-04 09:15:03.844443 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-02-04 09:15:03.885839 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:15:03.886074 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-02-04 09:15:03.928191 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:15:03.977444 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-02-04 09:15:03.977605 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-02-04 09:15:03.980800 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:15:03.980855 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-02-04 09:15:04.009443 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:15:04.096797 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:15:04.096949 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-02-04 09:15:04.096976 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:15:04.098229 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-02-04 09:15:04.099026 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:15:04.099239 | orchestrator | 2025-02-04 09:15:04.099616 | orchestrator | TASK [osism.commons.cleanup : Include packages 
tasks] ************************** 2025-02-04 09:15:04.100571 | orchestrator | Tuesday 04 February 2025 09:15:04 +0000 (0:00:00.327) 0:04:08.620 ****** 2025-02-04 09:15:04.598799 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:15:04.602501 | orchestrator | 2025-02-04 09:15:04.602569 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-02-04 09:15:37.955614 | orchestrator | Tuesday 04 February 2025 09:15:04 +0000 (0:00:00.500) 0:04:09.120 ****** 2025-02-04 09:15:37.955773 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:15:37.955882 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:15:37.955922 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:15:37.956007 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:15:37.956673 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:15:37.957056 | orchestrator | changed: [testbed-manager] 2025-02-04 09:15:37.957720 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:15:37.958512 | orchestrator | 2025-02-04 09:15:37.959158 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-02-04 09:15:37.960034 | orchestrator | Tuesday 04 February 2025 09:15:37 +0000 (0:00:33.353) 0:04:42.474 ****** 2025-02-04 09:15:46.029124 | orchestrator | changed: [testbed-manager] 2025-02-04 09:15:46.029434 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:15:46.033092 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:15:46.036221 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:15:46.039261 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:15:46.040019 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:15:46.041721 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:15:46.043775 | orchestrator | 2025-02-04 09:15:54.151092 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-02-04 09:15:54.151225 | orchestrator | Tuesday 04 February 2025 09:15:46 +0000 (0:00:08.074) 0:04:50.548 ****** 2025-02-04 09:15:54.151293 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:15:54.151365 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:15:54.151383 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:15:54.151398 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:15:54.151417 | orchestrator | changed: [testbed-manager] 2025-02-04 09:15:54.153619 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:15:54.154101 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:15:54.154501 | orchestrator | 2025-02-04 09:15:54.155209 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-02-04 09:15:54.155395 | orchestrator | Tuesday 04 February 2025 09:15:54 +0000 (0:00:08.124) 0:04:58.672 ****** 2025-02-04 09:15:56.002504 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:15:56.003261 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:15:56.003325 | orchestrator | ok: [testbed-manager] 2025-02-04 09:15:56.005229 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:15:56.005851 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:15:56.006955 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:15:56.007527 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:15:56.008204 | orchestrator | 
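Note: the two cleanup steps around this point ("Remove useless packages from the cache" above and "Remove dependencies that are no longer required" just below) correspond to apt's autoclean and autoremove. A minimal sketch, assuming the role wraps ansible.builtin.apt — the actual osism.commons.cleanup internals are an assumption:

# Hedged sketch of the two apt cleanup tasks; the real role may differ.
- name: Remove useless packages from the cache   # apt-get autoclean
  ansible.builtin.apt:
    autoclean: true

- name: Remove dependencies that are no longer required   # apt-get autoremove
  ansible.builtin.apt:
    autoremove: true
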
2025-02-04 09:15:56.009186 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-02-04 09:15:56.010594 | orchestrator | Tuesday 04 February 2025 09:15:55 +0000 (0:00:01.849) 0:05:00.522 ****** 2025-02-04 09:16:02.004696 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:16:02.006630 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:16:02.006689 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:16:02.007275 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:16:02.008251 | orchestrator | changed: [testbed-manager] 2025-02-04 09:16:02.009047 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:16:02.009851 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:16:02.010629 | orchestrator | 2025-02-04 09:16:02.011303 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-02-04 09:16:02.011912 | orchestrator | Tuesday 04 February 2025 09:16:01 +0000 (0:00:06.003) 0:05:06.525 ****** 2025-02-04 09:16:02.473420 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:16:02.474181 | orchestrator | 2025-02-04 09:16:02.474242 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-02-04 09:16:02.475197 | orchestrator | Tuesday 04 February 2025 09:16:02 +0000 (0:00:00.469) 0:05:06.995 ****** 2025-02-04 09:16:03.254980 | orchestrator | changed: [testbed-manager] 2025-02-04 09:16:03.255237 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:16:03.255609 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:16:03.255661 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:16:03.258214 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:16:03.258508 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:16:03.258704 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:16:03.258729 | orchestrator | 2025-02-04 09:16:03.258745 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-02-04 09:16:03.258765 | orchestrator | Tuesday 04 February 2025 09:16:03 +0000 (0:00:00.780) 0:05:07.775 ****** 2025-02-04 09:16:04.971640 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:16:04.971868 | orchestrator | ok: [testbed-manager] 2025-02-04 09:16:04.971936 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:16:04.972748 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:16:04.973948 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:16:04.975706 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:16:04.977264 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:16:04.978100 | orchestrator | 2025-02-04 09:16:04.979199 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-02-04 09:16:04.979959 | orchestrator | Tuesday 04 February 2025 09:16:04 +0000 (0:00:01.717) 0:05:09.493 ****** 2025-02-04 09:16:05.789061 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:16:05.791416 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:16:05.791588 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:16:05.791614 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:16:05.793020 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:16:05.794089 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:16:05.794889 | 
orchestrator | changed: [testbed-manager] 2025-02-04 09:16:05.796030 | orchestrator | 2025-02-04 09:16:05.796931 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-02-04 09:16:05.797491 | orchestrator | Tuesday 04 February 2025 09:16:05 +0000 (0:00:00.815) 0:05:10.308 ****** 2025-02-04 09:16:05.856596 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:16:05.893712 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:16:05.928102 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:16:05.960169 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:16:05.999682 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:16:06.087353 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:16:06.087696 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:16:06.088585 | orchestrator | 2025-02-04 09:16:06.089118 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-02-04 09:16:06.089900 | orchestrator | Tuesday 04 February 2025 09:16:06 +0000 (0:00:00.302) 0:05:10.611 ****** 2025-02-04 09:16:06.188278 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:16:06.232195 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:16:06.270392 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:16:06.303535 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:16:06.545774 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:16:06.546590 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:16:06.546636 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:16:06.550305 | orchestrator | 2025-02-04 09:16:06.656983 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-02-04 09:16:06.657094 | orchestrator | Tuesday 04 February 2025 09:16:06 +0000 (0:00:00.457) 0:05:11.068 ****** 2025-02-04 09:16:06.657128 | orchestrator | ok: [testbed-manager] 2025-02-04 09:16:06.692752 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:16:06.733839 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:16:06.768134 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:16:06.854395 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:16:06.854728 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:16:06.855694 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:16:06.856213 | orchestrator | 2025-02-04 09:16:06.857206 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-02-04 09:16:06.964012 | orchestrator | Tuesday 04 February 2025 09:16:06 +0000 (0:00:00.308) 0:05:11.377 ****** 2025-02-04 09:16:06.964166 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:16:07.015736 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:16:07.166956 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:16:07.202112 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:16:07.284013 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:16:07.284766 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:16:07.285606 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:16:07.287102 | orchestrator | 2025-02-04 09:16:07.288011 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-02-04 09:16:07.288057 | orchestrator | Tuesday 04 February 2025 09:16:07 +0000 (0:00:00.427) 0:05:11.805 ****** 2025-02-04 09:16:07.424605 | orchestrator | ok: [testbed-manager] 2025-02-04 09:16:07.457974 | orchestrator | ok: 
[testbed-node-0]
2025-02-04 09:16:07.503915 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:16:07.534219 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:16:07.623918 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:16:07.624564 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:16:07.631399 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:16:07.702595 | orchestrator |
2025-02-04 09:16:07.702716 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-02-04 09:16:07.702739 | orchestrator | Tuesday 04 February 2025 09:16:07 +0000 (0:00:00.341) 0:05:12.146 ******
2025-02-04 09:16:07.702808 | orchestrator | skipping: [testbed-manager]
2025-02-04 09:16:07.768576 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:16:07.803880 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:16:07.842264 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:16:07.877959 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:16:07.937714 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:16:07.938624 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:16:07.939411 | orchestrator |
2025-02-04 09:16:07.940544 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-02-04 09:16:07.941099 | orchestrator | Tuesday 04 February 2025 09:16:07 +0000 (0:00:00.313) 0:05:12.460 ******
2025-02-04 09:16:08.012353 | orchestrator | skipping: [testbed-manager]
2025-02-04 09:16:08.078360 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:16:08.110184 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:16:08.142977 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:16:08.200741 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:16:08.201107 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:16:08.201604 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:16:08.202081 | orchestrator |
2025-02-04 09:16:08.203064 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2025-02-04 09:16:08.737369 | orchestrator | Tuesday 04 February 2025 09:16:08 +0000 (0:00:00.263) 0:05:12.724 ******
2025-02-04 09:16:08.737567 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-02-04 09:16:08.737896 | orchestrator |
2025-02-04 09:16:08.738276 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2025-02-04 09:16:08.739340 | orchestrator | Tuesday 04 February 2025 09:16:08 +0000 (0:00:00.535) 0:05:13.259 ******
2025-02-04 09:16:09.588191 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:16:09.588406 | orchestrator | ok: [testbed-manager]
2025-02-04 09:16:09.588833 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:16:09.589176 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:16:09.591842 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:16:09.591912 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:16:09.594533 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:16:09.596333 | orchestrator |
2025-02-04 09:16:09.597868 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2025-02-04 09:16:09.598551 | orchestrator | Tuesday 04 February 2025 09:16:09 +0000 (0:00:00.850) 0:05:14.109 ******
2025-02-04 09:16:12.457173 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:16:12.457595 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:16:12.457636 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:16:12.461292 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:16:12.462097 | orchestrator | ok: [testbed-manager]
2025-02-04 09:16:12.463063 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:16:12.463209 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:16:12.464052 | orchestrator |
2025-02-04 09:16:12.464580 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2025-02-04 09:16:12.464983 | orchestrator | Tuesday 04 February 2025 09:16:12 +0000 (0:00:02.868) 0:05:16.978 ******
2025-02-04 09:16:12.533796 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2025-02-04 09:16:12.636213 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2025-02-04 09:16:12.636364 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2025-02-04 09:16:12.637273 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2025-02-04 09:16:12.640736 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2025-02-04 09:16:12.641967 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2025-02-04 09:16:12.880812 | orchestrator | skipping: [testbed-manager]
2025-02-04 09:16:12.880983 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2025-02-04 09:16:12.881524 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2025-02-04 09:16:12.961082 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2025-02-04 09:16:12.961846 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:16:12.962109 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2025-02-04 09:16:12.962674 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2025-02-04 09:16:13.058340 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2025-02-04 09:16:13.060009 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:16:13.061187 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2025-02-04 09:16:13.061365 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2025-02-04 09:16:13.063443 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2025-02-04 09:16:13.143710 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:16:13.146404 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2025-02-04 09:16:13.285427 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2025-02-04 09:16:13.285555 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2025-02-04 09:16:13.285575 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:16:13.285607 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:16:13.289123 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2025-02-04 09:16:13.290289 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2025-02-04 09:16:13.290334 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2025-02-04 09:16:13.293063 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:16:13.294721 | orchestrator |
2025-02-04 09:16:13.295113 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2025-02-04 09:16:13.295810 | orchestrator | Tuesday 04 February 2025 09:16:13 +0000 (0:00:00.828) 0:05:17.806 ******
2025-02-04 09:16:20.132381 | orchestrator | ok: [testbed-manager]
2025-02-04 09:16:20.132724 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:16:20.132763 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:16:20.133003 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:16:20.133037 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:16:20.133235 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:16:20.134232 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:16:20.134358 | orchestrator |
2025-02-04 09:16:20.137508 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2025-02-04 09:16:21.265731 | orchestrator | Tuesday 04 February 2025 09:16:20 +0000 (0:00:06.844) 0:05:24.651 ******
2025-02-04 09:16:21.265838 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:16:21.266104 | orchestrator | ok: [testbed-manager]
2025-02-04 09:16:21.267151 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:16:21.267730 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:16:21.268389 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:16:21.269317 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:16:21.270153 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:16:21.271035 | orchestrator |
2025-02-04 09:16:21.271778 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-02-04 09:16:21.272420 | orchestrator | Tuesday 04 February 2025 09:16:21 +0000 (0:00:01.134) 0:05:25.785 ******
2025-02-04 09:16:29.086549 | orchestrator | ok: [testbed-manager]
2025-02-04 09:16:29.090390 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:16:29.090460 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:16:29.091101 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:16:29.091163 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:16:29.091222 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:16:29.091657 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:16:29.093788 | orchestrator |
2025-02-04 09:16:29.096501 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-02-04 09:16:29.096922 | orchestrator | Tuesday 04 February 2025 09:16:29 +0000 (0:00:07.819) 0:05:33.605 ******
2025-02-04 09:16:32.214603 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:16:32.214807 | orchestrator | changed: [testbed-manager]
2025-02-04 09:16:32.214835 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:16:32.214850 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:16:32.214864 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:16:32.214885 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:16:32.217509 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:16:32.217799 | orchestrator |
2025-02-04 09:16:32.218097 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-02-04 09:16:32.218132 | orchestrator | Tuesday 04 February 2025 09:16:32 +0000 (0:00:03.129) 0:05:36.735 ******
2025-02-04 09:16:33.847126 | orchestrator | ok: [testbed-manager]
2025-02-04 09:16:33.848027 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:16:33.848094 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:16:33.848118 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:16:33.848181 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:16:33.849949 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:16:33.850393 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:16:33.854125 | orchestrator |
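The pin step above usually comes down to an apt preferences entry that nails docker-ce to the release OSISM has validated, so a routine apt upgrade cannot move it. A minimal sketch of such a task, assuming a hypothetical docker_version variable (the role's actual template and file name may differ):

    # Illustrative sketch, not the role's real implementation.
    - name: Pin docker package version
      ansible.builtin.copy:
        dest: /etc/apt/preferences.d/docker-ce
        mode: "0644"
        content: |
          Package: docker-ce
          Pin: version {{ docker_version }}
          Pin-Priority: 1000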
2025-02-04 09:16:33.854226 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-02-04 09:16:33.854874 | orchestrator | Tuesday 04 February 2025 09:16:33 +0000 (0:00:01.633) 0:05:38.368 ******
2025-02-04 09:16:35.337954 | orchestrator | ok: [testbed-manager]
2025-02-04 09:16:35.338948 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:16:35.339792 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:16:35.340228 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:16:35.340701 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:16:35.342636 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:16:35.343380 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:16:35.345205 | orchestrator |
2025-02-04 09:16:35.585269 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-02-04 09:16:35.585398 | orchestrator | Tuesday 04 February 2025 09:16:35 +0000 (0:00:01.487) 0:05:39.855 ******
2025-02-04 09:16:35.585456 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:16:35.667739 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:16:35.760111 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:16:35.834128 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:16:36.049136 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:16:36.049593 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:16:36.050985 | orchestrator | changed: [testbed-manager]
2025-02-04 09:16:36.052580 | orchestrator |
2025-02-04 09:16:46.022873 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-02-04 09:16:46.022999 | orchestrator | Tuesday 04 February 2025 09:16:36 +0000 (0:00:00.714) 0:05:40.569 ******
2025-02-04 09:16:46.023034 | orchestrator | ok: [testbed-manager]
2025-02-04 09:16:46.023102 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:16:46.024164 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:16:46.026341 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:16:46.027364 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:16:46.028807 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:16:46.029262 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:16:46.030498 | orchestrator |
2025-02-04 09:16:46.031890 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-02-04 09:16:46.032082 | orchestrator | Tuesday 04 February 2025 09:16:46 +0000 (0:00:09.972) 0:05:50.542 ******
2025-02-04 09:16:47.212568 | orchestrator | changed: [testbed-manager]
2025-02-04 09:16:47.213202 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:16:47.213255 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:16:47.214126 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:16:47.215324 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:16:47.217428 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:16:47.221613 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:16:47.221676 | orchestrator |
2025-02-04 09:16:47.222648 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-02-04 09:16:47.223428 | orchestrator | Tuesday 04 February 2025 09:16:47 +0000 (0:00:01.188) 0:05:51.731 ******
2025-02-04 09:17:00.252359 | orchestrator | ok: [testbed-manager]
2025-02-04 09:17:00.253049 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:17:00.253244 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:17:00.253298 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:17:00.253417 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:17:00.255260 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:17:00.255792 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:17:00.256165 | orchestrator |
2025-02-04 09:17:00.256967 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-02-04 09:17:00.257322 | orchestrator | Tuesday 04 February 2025 09:17:00 +0000 (0:00:13.036) 0:06:04.768 ******
2025-02-04 09:17:13.589595 | orchestrator | ok: [testbed-manager]
2025-02-04 09:17:13.589972 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:17:13.590057 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:17:13.591825 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:17:13.592771 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:17:13.594364 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:17:13.594855 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:17:13.596230 | orchestrator |
2025-02-04 09:17:13.597142 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-02-04 09:17:13.598341 | orchestrator | Tuesday 04 February 2025 09:17:13 +0000 (0:00:13.340) 0:06:18.109 ******
2025-02-04 09:17:14.063358 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-02-04 09:17:14.172811 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2025-02-04 09:17:14.173292 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2025-02-04 09:17:15.014388 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-02-04 09:17:15.015016 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2025-02-04 09:17:15.016223 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-02-04 09:17:15.017496 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2025-02-04 09:17:15.017890 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-02-04 09:17:15.020066 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-02-04 09:17:15.020789 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2025-02-04 09:17:15.020834 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-02-04 09:17:15.021504 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-02-04 09:17:15.022211 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-02-04 09:17:15.022679 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-02-04 09:17:15.023267 | orchestrator |
2025-02-04 09:17:15.024064 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-02-04 09:17:15.161278 | orchestrator | Tuesday 04 February 2025 09:17:14 +0000 (0:00:01.420) 0:06:19.530 ******
2025-02-04 09:17:15.161347 | orchestrator | skipping: [testbed-manager]
2025-02-04 09:17:15.243535 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:17:15.315040 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:17:15.390674 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:17:15.459888 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:17:15.601709 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:17:15.602203 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:17:15.607072 | orchestrator |
2025-02-04 09:17:19.693087 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-02-04 09:17:19.693206 | orchestrator | Tuesday 04 February 2025 09:17:15 +0000 (0:00:00.591) 0:06:20.121 ******
2025-02-04 09:17:19.693234 | orchestrator | ok: [testbed-manager]
2025-02-04 09:17:19.695827 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:17:19.697032 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:17:19.697044 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:17:19.697051 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:17:19.702876 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:17:19.704605 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:17:19.704634 | orchestrator |
2025-02-04 09:17:19.704885 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-02-04 09:17:19.706268 | orchestrator | Tuesday 04 February 2025 09:17:19 +0000 (0:00:04.089) 0:06:24.211 ******
2025-02-04 09:17:19.834000 | orchestrator | skipping: [testbed-manager]
2025-02-04 09:17:19.905104 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:17:19.975155 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:17:20.056118 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:17:20.134071 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:17:20.243692 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:17:20.244079 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:17:20.244840 | orchestrator |
2025-02-04 09:17:20.245725 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-02-04 09:17:20.246626 | orchestrator | Tuesday 04 February 2025 09:17:20 +0000 (0:00:00.551) 0:06:24.762 ******
2025-02-04 09:17:20.330807 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2025-02-04 09:17:20.331016 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2025-02-04 09:17:20.426462 | orchestrator | skipping: [testbed-manager]
2025-02-04 09:17:20.426721 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2025-02-04 09:17:20.427812 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-02-04 09:17:20.506093 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:17:20.507080 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2025-02-04 09:17:20.508144 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2025-02-04 09:17:20.589125 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:17:20.590172 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2025-02-04 09:17:20.590608 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2025-02-04 09:17:20.676821 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:17:20.677059 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-02-04 09:17:20.678220 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-02-04 09:17:20.752233 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:17:20.753183 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-02-04 09:17:20.753570 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-02-04 09:17:20.870877 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:17:20.871402 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-02-04 09:17:20.872061 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-02-04 09:17:20.873101 | orchestrator | skipping: [testbed-node-5]
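The unlock/install/lock sequence around containerd above is the usual dpkg hold dance: release the hold, let apt move the package, then freeze it again so nothing but this role can upgrade it. Sketched with ansible.builtin.dpkg_selections; the package name containerd.io is an assumption based on the Docker CE repository, not read from the role:

    # Sketch of the hold pattern; names and ordering follow the task names in this log.
    - name: Unlock containerd package
      ansible.builtin.dpkg_selections:
        name: containerd.io       # assumed package name
        selection: install

    - name: Install containerd package
      ansible.builtin.apt:
        name: containerd.io
        state: latest

    - name: Lock containerd package
      ansible.builtin.dpkg_selections:
        name: containerd.io
        selection: hold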
2025-02-04 09:17:20.873888 | orchestrator |
2025-02-04 09:17:20.876365 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-02-04 09:17:21.011108 | orchestrator | Tuesday 04 February 2025 09:17:20 +0000 (0:00:00.629) 0:06:25.392 ******
2025-02-04 09:17:21.011238 | orchestrator | skipping: [testbed-manager]
2025-02-04 09:17:21.085757 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:17:21.160972 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:17:21.230411 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:17:21.307449 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:17:21.425129 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:17:21.425327 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:17:21.425897 | orchestrator |
2025-02-04 09:17:21.426643 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-02-04 09:17:21.426758 | orchestrator | Tuesday 04 February 2025 09:17:21 +0000 (0:00:00.556) 0:06:25.948 ******
2025-02-04 09:17:21.563258 | orchestrator | skipping: [testbed-manager]
2025-02-04 09:17:21.635993 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:17:21.702127 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:17:21.772991 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:17:21.840650 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:17:21.932722 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:17:21.933134 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:17:21.934011 | orchestrator |
2025-02-04 09:17:21.934993 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-02-04 09:17:21.935528 | orchestrator | Tuesday 04 February 2025 09:17:21 +0000 (0:00:00.504) 0:06:26.453 ******
2025-02-04 09:17:22.334925 | orchestrator | skipping: [testbed-manager]
2025-02-04 09:17:22.399344 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:17:22.470317 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:17:22.538392 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:17:22.679036 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:17:22.679270 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:17:22.680806 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:17:22.681182 | orchestrator |
2025-02-04 09:17:22.682912 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-02-04 09:17:22.683513 | orchestrator | Tuesday 04 February 2025 09:17:22 +0000 (0:00:00.746) 0:06:27.199 ******
2025-02-04 09:17:28.846246 | orchestrator | ok: [testbed-manager]
2025-02-04 09:17:28.849871 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:17:28.849913 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:17:28.849938 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:17:28.851744 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:17:28.851774 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:17:28.851789 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:17:28.851804 | orchestrator |
2025-02-04 09:17:28.851827 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-02-04 09:17:28.855999 | orchestrator | Tuesday 04 February 2025 09:17:28 +0000 (0:00:06.163) 0:06:33.363 ******
2025-02-04 09:17:29.796355 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-02-04 09:17:29.796542 | orchestrator |
2025-02-04 09:17:29.797626 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-02-04 09:17:29.798365 | orchestrator | Tuesday 04 February 2025 09:17:29 +0000 (0:00:00.952) 0:06:34.316 ******
2025-02-04 09:17:30.266121 | orchestrator | ok: [testbed-manager]
2025-02-04 09:17:30.909036 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:17:30.911251 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:17:30.912615 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:17:30.912650 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:17:30.913666 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:17:30.914918 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:17:30.915572 | orchestrator |
2025-02-04 09:17:30.916659 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-02-04 09:17:30.917362 | orchestrator | Tuesday 04 February 2025 09:17:30 +0000 (0:00:01.112) 0:06:35.428 ******
2025-02-04 09:17:31.871710 | orchestrator | ok: [testbed-manager]
2025-02-04 09:17:31.871856 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:17:31.872554 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:17:31.873851 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:17:31.874127 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:17:31.878140 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:17:31.878253 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:17:31.879103 | orchestrator |
2025-02-04 09:17:31.879815 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-02-04 09:17:31.880446 | orchestrator | Tuesday 04 February 2025 09:17:31 +0000 (0:00:00.960) 0:06:36.388 ******
2025-02-04 09:17:33.326299 | orchestrator | ok: [testbed-manager]
2025-02-04 09:17:33.327015 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:17:33.330804 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:17:33.331635 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:17:33.332257 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:17:33.332552 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:17:33.333345 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:17:33.334248 | orchestrator |
2025-02-04 09:17:33.334732 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-02-04 09:17:33.337091 | orchestrator | Tuesday 04 February 2025 09:17:33 +0000 (0:00:01.457) 0:06:37.845 ******
2025-02-04 09:17:33.471449 | orchestrator | skipping: [testbed-manager]
2025-02-04 09:17:34.945083 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:17:34.948277 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:17:34.949656 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:17:34.949792 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:17:34.949838 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:17:34.949951 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:17:34.951864 | orchestrator |
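The overlay tasks above drop a unit override under docker.service.d and reload systemd only when the file changed, which is why the reload is skipped on the manager (file already in place) and runs on the freshly configured nodes. A sketch of the mechanism; the override values shown are placeholders, not read from this run:

    # Illustrative sketch of the overlay + conditional reload pattern.
    - name: Copy systemd overlay file
      ansible.builtin.copy:
        dest: /etc/systemd/system/docker.service.d/overlay.conf
        mode: "0644"
        content: |
          [Service]
          LimitNOFILE=1048576
          TasksMax=infinity
      register: docker_systemd_overlay

    - name: Reload systemd daemon if systemd overlay file is changed
      ansible.builtin.systemd:
        daemon_reload: true
      when: docker_systemd_overlay.changed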
2025-02-04 09:17:34.953032 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-02-04 09:17:34.953941 | orchestrator | Tuesday 04 February 2025 09:17:34 +0000 (0:00:01.619) 0:06:39.465 ******
2025-02-04 09:17:36.307361 | orchestrator | ok: [testbed-manager]
2025-02-04 09:17:36.307796 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:17:36.312432 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:17:36.312691 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:17:36.313644 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:17:36.314428 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:17:36.314935 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:17:36.316951 | orchestrator |
2025-02-04 09:17:36.320750 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-02-04 09:17:36.321388 | orchestrator | Tuesday 04 February 2025 09:17:36 +0000 (0:00:01.361) 0:06:40.826 ******
2025-02-04 09:17:38.247258 | orchestrator | changed: [testbed-manager]
2025-02-04 09:17:38.248139 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:17:38.248191 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:17:38.248229 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:17:38.248792 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:17:38.249539 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:17:38.251767 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:17:38.251878 | orchestrator |
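daemon.json is the dockerd configuration written above; it reported changed on every host, including the manager, which is what later queues the Restart docker service handler. An illustrative task; the keys shown are common dockerd settings and are assumed, not read from this deployment:

    # Illustrative daemon.json write; actual keys come from the role's template.
    - name: Copy daemon.json configuration file
      ansible.builtin.copy:
        dest: /etc/docker/daemon.json
        mode: "0644"
        content: |
          {
            "log-driver": "json-file",
            "log-opts": {"max-size": "10m", "max-file": "5"},
            "live-restore": true
          }
      notify: Restart docker service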
2025-02-04 09:17:38.254653 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-02-04 09:17:38.256738 | orchestrator | Tuesday 04 February 2025 09:17:38 +0000 (0:00:01.940) 0:06:42.767 ******
2025-02-04 09:17:39.226612 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-02-04 09:17:39.226764 | orchestrator |
2025-02-04 09:17:39.227636 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-02-04 09:17:39.228845 | orchestrator | Tuesday 04 February 2025 09:17:39 +0000 (0:00:00.977) 0:06:43.745 ******
2025-02-04 09:17:40.836887 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:17:40.837399 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:17:40.838193 | orchestrator | ok: [testbed-manager]
2025-02-04 09:17:40.839596 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:17:40.840051 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:17:40.843161 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:17:40.843703 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:17:40.844158 | orchestrator |
2025-02-04 09:17:40.845214 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-02-04 09:17:40.846168 | orchestrator | Tuesday 04 February 2025 09:17:40 +0000 (0:00:01.613) 0:06:45.358 ******
2025-02-04 09:17:42.991766 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:17:42.994278 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:17:42.995664 | orchestrator | ok: [testbed-manager]
2025-02-04 09:17:42.995692 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:17:42.996270 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:17:42.997364 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:17:42.998129 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:17:42.998885 | orchestrator |
2025-02-04 09:17:42.999774 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-02-04 09:17:43.000035 | orchestrator | Tuesday 04 February 2025 09:17:42 +0000 (0:00:02.152) 0:06:47.511 ******
2025-02-04 09:17:44.487046 | orchestrator | ok: [testbed-manager]
2025-02-04 09:17:44.487998 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:17:44.488973 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:17:44.489820 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:17:44.490798 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:17:44.491844 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:17:44.493345 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:17:44.494374 | orchestrator |
2025-02-04 09:17:44.495694 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-02-04 09:17:44.496704 | orchestrator | Tuesday 04 February 2025 09:17:44 +0000 (0:00:01.494) 0:06:49.005 ******
2025-02-04 09:17:45.725426 | orchestrator | ok: [testbed-manager]
2025-02-04 09:17:45.725856 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:17:45.725909 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:17:45.726738 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:17:45.727160 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:17:45.727647 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:17:45.728557 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:17:45.729794 | orchestrator |
2025-02-04 09:17:45.729880 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-02-04 09:17:45.729904 | orchestrator | Tuesday 04 February 2025 09:17:45 +0000 (0:00:01.241) 0:06:50.247 ******
2025-02-04 09:17:47.168683 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-02-04 09:17:47.171536 | orchestrator |
2025-02-04 09:17:47.172295 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-02-04 09:17:47.172342 | orchestrator | Tuesday 04 February 2025 09:17:46 +0000 (0:00:00.913) 0:06:51.160 ******
2025-02-04 09:17:47.176477 | orchestrator |
2025-02-04 09:17:47.181456 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-02-04 09:17:47.181558 | orchestrator | Tuesday 04 February 2025 09:17:46 +0000 (0:00:00.039) 0:06:51.200 ******
2025-02-04 09:17:47.182313 | orchestrator |
2025-02-04 09:17:47.182397 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-02-04 09:17:47.182428 | orchestrator | Tuesday 04 February 2025 09:17:46 +0000 (0:00:00.047) 0:06:51.247 ******
2025-02-04 09:17:47.182531 | orchestrator |
2025-02-04 09:17:47.182553 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-02-04 09:17:47.183082 | orchestrator | Tuesday 04 February 2025 09:17:46 +0000 (0:00:00.049) 0:06:51.297 ******
2025-02-04 09:17:47.184451 | orchestrator |
2025-02-04 09:17:47.184476 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-02-04 09:17:47.187685 | orchestrator | Tuesday 04 February 2025 09:17:46 +0000 (0:00:00.049) 0:06:51.346 ******
2025-02-04 09:17:47.193243 | orchestrator |
2025-02-04 09:17:47.193907 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-02-04 09:17:47.193940 | orchestrator | Tuesday 04 February 2025 09:17:46 +0000 (0:00:00.054) 0:06:51.400 ******
2025-02-04 09:17:47.193958 | orchestrator |
2025-02-04 09:17:47.193981 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-02-04 09:17:47.194292 | orchestrator | Tuesday 04 February 2025 09:17:47 +0000 (0:00:00.242) 0:06:51.643 ******
2025-02-04 09:17:47.195251 | orchestrator |
2025-02-04 09:17:47.195514 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-02-04 09:17:47.196419 | orchestrator | Tuesday 04 February 2025 09:17:47 +0000 (0:00:00.042) 0:06:51.685 ******
2025-02-04 09:17:48.414589 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:17:48.418101 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:17:48.418286 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:17:48.418966 | orchestrator |
2025-02-04 09:17:48.420539 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-02-04 09:17:49.782273 | orchestrator | Tuesday 04 February 2025 09:17:48 +0000 (0:00:01.247) 0:06:52.933 ******
2025-02-04 09:17:49.782414 | orchestrator | changed: [testbed-manager]
2025-02-04 09:17:49.782736 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:17:49.782779 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:17:49.782814 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:17:49.782888 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:17:49.783313 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:17:49.783581 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:17:49.784056 | orchestrator |
2025-02-04 09:17:49.784306 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2025-02-04 09:17:49.785124 | orchestrator | Tuesday 04 February 2025 09:17:49 +0000 (0:00:01.367) 0:06:54.301 ******
2025-02-04 09:17:50.945865 | orchestrator | changed: [testbed-manager]
2025-02-04 09:17:50.946766 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:17:50.947336 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:17:50.948184 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:17:50.948570 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:17:50.949647 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:17:50.949933 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:17:50.950688 | orchestrator |
2025-02-04 09:17:50.951082 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2025-02-04 09:17:50.951741 | orchestrator | Tuesday 04 February 2025 09:17:50 +0000 (0:00:01.162) 0:06:55.464 ******
2025-02-04 09:17:51.100096 | orchestrator | skipping: [testbed-manager]
2025-02-04 09:17:53.255150 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:17:53.255811 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:17:53.255920 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:17:53.255937 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:17:53.255959 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:17:53.256013 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:17:53.256030 | orchestrator |
2025-02-04 09:17:53.257179 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2025-02-04 09:17:53.257271 | orchestrator | Tuesday 04 February 2025 09:17:53 +0000 (0:00:02.309) 0:06:57.773 ******
2025-02-04 09:17:53.361723 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:17:53.361884 | orchestrator |
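The two docker handlers above form a restart-then-settle pair: the restart fires on every host whose daemon.json changed, while the manager is skipped, presumably to avoid bouncing the daemon that carries the deployment tooling, and the optional wait (skipped in this run) gives dockerd time to come back before the play continues. A sketch of the pattern, with docker_restart_wait_seconds as an assumed variable name:

    # Illustrative handler pair; the role's real conditions may differ.
    handlers:
      - name: Restart docker service
        ansible.builtin.systemd:
          name: docker
          state: restarted

      - name: Wait after docker service restart
        ansible.builtin.pause:
          seconds: "{{ docker_restart_wait_seconds | default(10) }}"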
2025-02-04 09:17:53.361907 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2025-02-04 09:17:53.361928 | orchestrator | Tuesday 04 February 2025 09:17:53 +0000 (0:00:00.111) 0:06:57.885 ******
2025-02-04 09:17:54.722343 | orchestrator | ok: [testbed-manager]
2025-02-04 09:17:54.722646 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:17:54.722705 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:17:54.723417 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:17:54.723933 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:17:54.724844 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:17:54.724894 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:17:54.726782 | orchestrator |
2025-02-04 09:17:54.727475 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2025-02-04 09:17:54.727941 | orchestrator | Tuesday 04 February 2025 09:17:54 +0000 (0:00:01.356) 0:06:59.241 ******
2025-02-04 09:17:54.889709 | orchestrator | skipping: [testbed-manager]
2025-02-04 09:17:54.964628 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:17:55.034382 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:17:55.123359 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:17:55.192140 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:17:55.351664 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:17:55.351905 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:17:55.351940 | orchestrator |
2025-02-04 09:17:55.352861 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2025-02-04 09:17:55.353689 | orchestrator | Tuesday 04 February 2025 09:17:55 +0000 (0:00:00.627) 0:06:59.868 ******
2025-02-04 09:17:56.323042 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-02-04 09:17:56.323221 | orchestrator |
2025-02-04 09:17:56.324150 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2025-02-04 09:17:57.273848 | orchestrator | Tuesday 04 February 2025 09:17:56 +0000 (0:00:00.977) 0:07:00.846 ******
2025-02-04 09:17:57.273989 | orchestrator | ok: [testbed-manager]
2025-02-04 09:17:57.274258 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:17:57.274293 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:17:57.274316 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:17:57.274341 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:17:57.274362 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:17:57.274393 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:17:57.275230 | orchestrator |
2025-02-04 09:17:57.276266 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2025-02-04 09:17:57.276819 | orchestrator | Tuesday 04 February 2025 09:17:57 +0000 (0:00:00.943) 0:07:01.789 ******
2025-02-04 09:17:58.049040 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2025-02-04 09:18:00.251218 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2025-02-04 09:18:00.251919 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2025-02-04 09:18:00.253151 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2025-02-04 09:18:00.254281 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2025-02-04 09:18:00.255163 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2025-02-04 09:18:00.255670 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2025-02-04 09:18:00.258079 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2025-02-04 09:18:00.258718 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2025-02-04 09:18:00.259607 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2025-02-04 09:18:00.260359 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2025-02-04 09:18:00.261220 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2025-02-04 09:18:00.261626 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2025-02-04 09:18:00.262674 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2025-02-04 09:18:00.263208 | orchestrator |
2025-02-04 09:18:00.263740 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2025-02-04 09:18:00.264752 | orchestrator | Tuesday 04 February 2025 09:18:00 +0000 (0:00:02.980) 0:07:04.770 ******
2025-02-04 09:18:00.408648 | orchestrator | skipping: [testbed-manager]
2025-02-04 09:18:00.475307 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:18:00.550470 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:18:00.617412 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:18:00.691469 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:18:00.818848 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:18:00.820339 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:18:00.821606 | orchestrator |
2025-02-04 09:18:00.822796 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2025-02-04 09:18:00.823452 | orchestrator | Tuesday 04 February 2025 09:18:00 +0000 (0:00:00.569) 0:07:05.340 ******
2025-02-04 09:18:01.790474 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-02-04 09:18:01.790879 | orchestrator |
2025-02-04 09:18:01.791685 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2025-02-04 09:18:01.792275 | orchestrator | Tuesday 04 February 2025 09:18:01 +0000 (0:00:00.968) 0:07:06.309 ******
2025-02-04 09:18:02.294422 | orchestrator | ok: [testbed-manager]
2025-02-04 09:18:02.687447 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:18:03.121808 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:18:03.122562 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:18:03.123377 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:18:03.127222 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:18:03.127575 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:18:03.128748 | orchestrator |
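The docker fact files copied above are executable local facts: anything under /etc/ansible/facts.d that prints JSON shows up as ansible_local.<name> on the next fact gathering, which is how later plays can ask a host what it is already running. A minimal sketch of the mechanism (the script body is illustrative, not the role's actual content):

    # Illustrative local-fact install; the role loops over docker_containers and docker_images.
    - name: Copy docker_containers fact file
      ansible.builtin.copy:
        dest: /etc/ansible/facts.d/docker_containers.fact
        mode: "0755"
        content: |
          #!/usr/bin/env bash
          # Executable facts must emit JSON on stdout
          echo "{\"count\": $(docker ps -q | wc -l)}"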
2025-02-04 09:18:03.129681 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2025-02-04 09:18:03.130624 | orchestrator | Tuesday 04 February 2025 09:18:03 +0000 (0:00:01.335) 0:07:07.644 ******
2025-02-04 09:18:03.570453 | orchestrator | ok: [testbed-manager]
2025-02-04 09:18:03.646996 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:18:04.096808 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:18:04.097059 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:18:04.097097 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:18:04.098479 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:18:04.099052 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:18:04.099681 | orchestrator |
2025-02-04 09:18:04.099825 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2025-02-04 09:18:04.100584 | orchestrator | Tuesday 04 February 2025 09:18:04 +0000 (0:00:00.972) 0:07:08.617 ******
2025-02-04 09:18:04.246347 | orchestrator | skipping: [testbed-manager]
2025-02-04 09:18:04.313195 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:18:04.379943 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:18:04.454645 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:18:04.532255 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:18:04.663613 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:18:04.665360 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:18:04.666945 | orchestrator |
2025-02-04 09:18:04.667339 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-02-04 09:18:04.668762 | orchestrator | Tuesday 04 February 2025 09:18:04 +0000 (0:00:00.567) 0:07:09.184 ******
2025-02-04 09:18:06.134632 | orchestrator | ok: [testbed-manager]
2025-02-04 09:18:06.135635 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:18:06.135706 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:18:06.136717 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:18:06.137834 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:18:06.139140 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:18:06.139917 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:18:06.140963 | orchestrator |
2025-02-04 09:18:06.141973 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-02-04 09:18:06.143180 | orchestrator | Tuesday 04 February 2025 09:18:06 +0000 (0:00:01.470) 0:07:10.655 ******
2025-02-04 09:18:06.265738 | orchestrator | skipping: [testbed-manager]
2025-02-04 09:18:06.336124 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:18:06.399299 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:18:06.465750 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:18:06.726915 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:18:06.837392 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:18:06.839683 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:18:06.843099 | orchestrator |
2025-02-04 09:18:06.844192 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-02-04 09:18:06.845174 | orchestrator | Tuesday 04 February 2025 09:18:06 +0000 (0:00:00.703) 0:07:11.358 ******
2025-02-04 09:18:08.787992 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:18:08.788593 | orchestrator | ok: [testbed-manager]
2025-02-04 09:18:08.789068 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:18:08.790169 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:18:08.791440 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:18:08.792097 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:18:08.792691 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:18:08.793124 | orchestrator |
2025-02-04 09:18:08.793665 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-02-04 09:18:08.794605 | orchestrator | Tuesday 04 February 2025 09:18:08 +0000 (0:00:01.947) 0:07:13.305 ******
2025-02-04 09:18:10.225167 | orchestrator | ok: [testbed-manager]
2025-02-04 09:18:10.225464 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:18:10.226013 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:18:10.227822 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:18:10.228390 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:18:10.229725 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:18:10.231458 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:18:10.232058 | orchestrator |
2025-02-04 09:18:10.232529 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2025-02-04 09:18:10.233268 | orchestrator | Tuesday 04 February 2025 09:18:10 +0000 (0:00:01.439) 0:07:14.745 ******
2025-02-04 09:18:12.105352 | orchestrator | ok: [testbed-manager]
2025-02-04 09:18:12.105673 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:18:12.106284 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:18:12.106983 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:18:12.107032 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:18:12.107395 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:18:12.109296 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:18:12.110836 | orchestrator |
2025-02-04 09:18:12.110876 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-02-04 09:18:12.110933 | orchestrator | Tuesday 04 February 2025 09:18:12 +0000 (0:00:01.878) 0:07:16.624 ******
2025-02-04 09:18:14.038398 | orchestrator | ok: [testbed-manager]
2025-02-04 09:18:14.038660 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:18:14.038688 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:18:14.038701 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:18:14.038714 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:18:14.038733 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:18:14.039073 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:18:14.039690 | orchestrator |
2025-02-04 09:18:14.040115 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-02-04 09:18:14.495768 | orchestrator | Tuesday 04 February 2025 09:18:14 +0000 (0:00:01.934) 0:07:18.558 ******
2025-02-04 09:18:14.495905 | orchestrator | ok: [testbed-manager]
2025-02-04 09:18:14.954315 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:18:14.955362 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:18:14.956316 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:18:14.959446 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:18:15.115036 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:18:15.115123 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:18:15.115133 | orchestrator |
2025-02-04 09:18:15.115143 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-02-04 09:18:15.115152 | orchestrator | Tuesday 04 February 2025 09:18:14 +0000 (0:00:00.916) 0:07:19.474 ******
2025-02-04 09:18:15.115173 | orchestrator | skipping: [testbed-manager]
2025-02-04 09:18:15.198305 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:18:15.260789 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:18:15.338219 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:18:15.424338 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:18:15.864885 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:18:16.015027 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:18:16.015136 | orchestrator |
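osism.target, copied and enabled above, is a plain systemd target that the per-service docker-compose units can attach to, so the whole stack can be started and ordered as one unit. A guess at its shape (unit text assumed, not taken from the role):

    # Illustrative target unit; the role's actual file may differ.
    - name: Copy osism.target systemd file
      ansible.builtin.copy:
        dest: /etc/systemd/system/osism.target
        content: |
          [Unit]
          Description=OSISM services

          [Install]
          WantedBy=multi-user.target

    - name: Enable osism.target
      ansible.builtin.systemd:
        name: osism.target
        enabled: true
        daemon_reload: true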
2025-02-04 09:18:16.015152 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-02-04 09:18:16.015165 | orchestrator | Tuesday 04 February 2025 09:18:15 +0000 (0:00:00.910) 0:07:20.385 ******
2025-02-04 09:18:16.015190 | orchestrator | skipping: [testbed-manager]
2025-02-04 09:18:16.106578 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:18:16.179437 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:18:16.257358 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:18:16.339551 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:18:16.464184 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:18:16.464405 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:18:16.465262 | orchestrator |
2025-02-04 09:18:16.466480 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-02-04 09:18:16.466905 | orchestrator | Tuesday 04 February 2025 09:18:16 +0000 (0:00:00.599) 0:07:20.984 ******
2025-02-04 09:18:16.829291 | orchestrator | ok: [testbed-manager]
2025-02-04 09:18:16.928216 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:18:17.014838 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:18:17.104702 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:18:17.182300 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:18:17.340956 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:18:17.341102 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:18:17.341721 | orchestrator |
2025-02-04 09:18:17.342437 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-02-04 09:18:17.344379 | orchestrator | Tuesday 04 February 2025 09:18:17 +0000 (0:00:00.876) 0:07:21.861 ******
2025-02-04 09:18:17.577159 | orchestrator | ok: [testbed-manager]
2025-02-04 09:18:17.685227 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:18:17.769222 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:18:17.841361 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:18:17.915057 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:18:18.047993 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:18:18.048575 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:18:18.049246 | orchestrator |
2025-02-04 09:18:18.052155 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-02-04 09:18:18.053126 | orchestrator | Tuesday 04 February 2025 09:18:18 +0000 (0:00:00.707) 0:07:22.568 ******
2025-02-04 09:18:18.191853 | orchestrator | ok: [testbed-manager]
2025-02-04 09:18:18.273152 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:18:18.347976 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:18:18.415229 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:18:18.488335 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:18:18.595703 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:18:18.596616 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:18:18.596888 | orchestrator |
2025-02-04 09:18:18.597696 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-02-04 09:18:18.598451 | orchestrator | Tuesday 04 February 2025 09:18:18 +0000 (0:00:00.547) 0:07:23.116 ******
2025-02-04 09:18:23.206939 | orchestrator | ok: [testbed-manager]
2025-02-04 09:18:23.207201 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:18:23.207253 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:18:23.207275 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:18:23.207951 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:18:23.208717 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:18:23.208967 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:18:23.209748 | orchestrator |
2025-02-04 09:18:23.210829 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-02-04 09:18:23.211129 | orchestrator | Tuesday 04 February 2025 09:18:23 +0000 (0:00:04.610) 0:07:27.727 ******
2025-02-04 09:18:23.348902 | orchestrator | skipping: [testbed-manager]
2025-02-04 09:18:23.424829 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:18:23.495713 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:18:23.803814 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:18:23.876537 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:18:24.002935 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:18:24.006648 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:18:24.006738 | orchestrator |
2025-02-04 09:18:24.928769 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-02-04 09:18:24.928888 | orchestrator | Tuesday 04 February 2025 09:18:23 +0000 (0:00:00.794) 0:07:28.521 ******
2025-02-04 09:18:24.928926 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-02-04 09:18:24.929003 | orchestrator |
2025-02-04 09:18:24.931860 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-02-04 09:18:26.876475 | orchestrator | Tuesday 04 February 2025 09:18:24 +0000 (0:00:00.928) 0:07:29.449 ******
2025-02-04 09:18:26.876728 | orchestrator | ok: [testbed-manager]
2025-02-04 09:18:26.877019 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:18:26.877053 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:18:26.877878 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:18:26.879122 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:18:26.880727 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:18:26.881017 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:18:26.882102 | orchestrator |
2025-02-04 09:18:26.883223 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-02-04 09:18:26.883724 | orchestrator | Tuesday 04 February 2025 09:18:26 +0000 (0:00:01.946) 0:07:31.396 ******
2025-02-04 09:18:28.164530 | orchestrator | ok: [testbed-manager]
2025-02-04 09:18:28.164687 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:18:28.164853 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:18:28.165425 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:18:28.166999 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:18:28.169670 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:18:28.170101 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:18:28.170144 | orchestrator |
2025-02-04 09:18:28.170657 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-02-04 09:18:28.171289 | orchestrator | Tuesday 04 February 2025 09:18:28 +0000 (0:00:01.287) 0:07:32.683 ******
2025-02-04 09:18:28.661648 | orchestrator | ok: [testbed-manager]
2025-02-04 09:18:28.741285 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:18:29.454129 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:18:29.455252 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:18:29.455287 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:18:29.455303 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:18:29.455318 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:18:29.455339 | orchestrator |
2025-02-04 09:18:29.455718 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-02-04 09:18:29.456682 | orchestrator | Tuesday 04 February 2025 09:18:29 +0000 (0:00:01.287) 0:07:33.971 ******
2025-02-04 09:18:31.466173 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-02-04 09:18:31.466862 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-02-04 09:18:31.466905 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-02-04 09:18:31.468360 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-02-04 09:18:31.468950 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-02-04 09:18:31.469790 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-02-04 09:18:31.470538 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-02-04 09:18:31.471012 | orchestrator |
2025-02-04 09:18:31.471042 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-02-04 09:18:31.471343 | orchestrator | Tuesday 04 February 2025 09:18:31 +0000 (0:00:02.014) 0:07:35.986 ******
2025-02-04 09:18:32.603282 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-02-04 09:18:32.603448 | orchestrator |
2025-02-04 09:18:32.605244 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-02-04 09:18:32.605898 | orchestrator | Tuesday 04 February 2025 09:18:32 +0000 (0:00:01.136) 0:07:37.123 ******
2025-02-04 09:18:41.486467 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:18:41.486625 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:18:41.486641 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:18:41.486650 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:18:41.486658 | orchestrator | changed: [testbed-manager]
2025-02-04 09:18:41.486670 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:18:41.487397 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:18:41.487667 | orchestrator |
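The chrony sequence above resolves distro-specific variables, installs the package, and templates chrony.conf; the copy reported changed on all seven hosts, which queues the restart handler that runs at the end of the play. A sketch of the templating step, with chrony_servers as an assumed variable name:

    # Illustrative chrony.conf write; the role uses its own chrony.conf.j2 template.
    - name: Copy configuration file
      ansible.builtin.copy:
        dest: /etc/chrony/chrony.conf
        mode: "0644"
        content: |
          {% for server in chrony_servers | default(['pool.ntp.org']) %}
          server {{ server }} iburst
          {% endfor %}
          driftfile /var/lib/chrony/drift
          makestep 1.0 3
      notify: Restart chrony service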
2025-02-04 09:18:43.353612 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:18:43.354107 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:18:43.354607 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:18:43.355051 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:18:43.355626 | orchestrator | 2025-02-04 09:18:43.356022 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-02-04 09:18:43.356637 | orchestrator | Tuesday 04 February 2025 09:18:43 +0000 (0:00:01.864) 0:07:47.871 ****** 2025-02-04 09:18:44.895357 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:18:44.895487 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:18:44.895559 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:18:44.895574 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:18:44.895593 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:18:44.895838 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:18:44.896187 | orchestrator | 2025-02-04 09:18:44.896702 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-02-04 09:18:44.897467 | orchestrator | Tuesday 04 February 2025 09:18:44 +0000 (0:00:01.541) 0:07:49.412 ****** 2025-02-04 09:18:46.215635 | orchestrator | changed: [testbed-manager] 2025-02-04 09:18:46.215862 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:18:46.215897 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:18:46.219466 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:18:46.220333 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:18:46.220362 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:18:46.220378 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:18:46.220395 | orchestrator | 2025-02-04 09:18:46.220413 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-02-04 09:18:46.220432 | orchestrator | 2025-02-04 09:18:46.220454 | orchestrator | TASK [Include hardening role] ************************************************** 2025-02-04 09:18:46.220840 | orchestrator | Tuesday 04 February 2025 09:18:46 +0000 (0:00:01.323) 0:07:50.736 ****** 2025-02-04 09:18:46.346006 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:18:46.423925 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:18:46.492687 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:18:46.571585 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:18:46.644964 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:18:46.811156 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:18:46.811894 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:18:46.812300 | orchestrator | 2025-02-04 09:18:46.813054 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-02-04 09:18:46.819265 | orchestrator | 2025-02-04 09:18:48.325670 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-02-04 09:18:48.325822 | orchestrator | Tuesday 04 February 2025 09:18:46 +0000 (0:00:00.595) 0:07:51.331 ****** 2025-02-04 09:18:48.325878 | orchestrator | changed: [testbed-manager] 2025-02-04 09:18:48.325977 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:18:48.329594 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:18:48.331797 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:18:48.333199 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:18:48.334232 | orchestrator | changed: [testbed-node-4] 2025-02-04 
09:18:48.334972 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:18:48.335778 | orchestrator | 2025-02-04 09:18:48.336302 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-02-04 09:18:48.337015 | orchestrator | Tuesday 04 February 2025 09:18:48 +0000 (0:00:01.510) 0:07:52.842 ****** 2025-02-04 09:18:50.179386 | orchestrator | ok: [testbed-manager] 2025-02-04 09:18:50.180255 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:18:50.184313 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:18:50.184364 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:18:50.185165 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:18:50.185546 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:18:50.186236 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:18:50.186661 | orchestrator | 2025-02-04 09:18:50.187494 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-02-04 09:18:50.187731 | orchestrator | Tuesday 04 February 2025 09:18:50 +0000 (0:00:01.857) 0:07:54.700 ****** 2025-02-04 09:18:50.316273 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:18:50.385885 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:18:50.466837 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:18:50.541804 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:18:50.613949 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:18:51.061622 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:18:51.062894 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:18:51.064918 | orchestrator | 2025-02-04 09:18:51.065276 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-02-04 09:18:51.066261 | orchestrator | Tuesday 04 February 2025 09:18:51 +0000 (0:00:00.874) 0:07:55.575 ****** 2025-02-04 09:18:52.509352 | orchestrator | changed: [testbed-manager] 2025-02-04 09:18:52.511427 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:18:52.514988 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:18:52.515021 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:18:52.515044 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:18:52.515582 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:18:52.516060 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:18:52.516323 | orchestrator | 2025-02-04 09:18:52.517182 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-02-04 09:18:52.518390 | orchestrator | 2025-02-04 09:18:52.519416 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-02-04 09:18:52.520253 | orchestrator | Tuesday 04 February 2025 09:18:52 +0000 (0:00:01.454) 0:07:57.029 ****** 2025-02-04 09:18:53.611172 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:18:53.612076 | orchestrator | 2025-02-04 09:18:53.615201 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-02-04 09:18:54.072865 | orchestrator | Tuesday 04 February 2025 09:18:53 +0000 (0:00:01.101) 0:07:58.130 ****** 2025-02-04 09:18:54.073043 | orchestrator | ok: [testbed-manager] 2025-02-04 09:18:54.593868 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:18:54.594088 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:18:54.594747 | orchestrator | ok: [testbed-node-2] 
2025-02-04 09:18:54.595150 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:18:54.595970 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:18:54.597420 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:18:54.597493 | orchestrator | 2025-02-04 09:18:54.597827 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-02-04 09:18:54.598708 | orchestrator | Tuesday 04 February 2025 09:18:54 +0000 (0:00:00.981) 0:07:59.112 ****** 2025-02-04 09:18:55.922361 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:18:55.922500 | orchestrator | changed: [testbed-manager] 2025-02-04 09:18:55.922556 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:18:55.925971 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:18:57.059831 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:18:57.059942 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:18:57.059979 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:18:57.059991 | orchestrator | 2025-02-04 09:18:57.060004 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-02-04 09:18:57.060016 | orchestrator | Tuesday 04 February 2025 09:18:55 +0000 (0:00:01.330) 0:08:00.443 ****** 2025-02-04 09:18:57.060040 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:18:57.060106 | orchestrator | 2025-02-04 09:18:57.060733 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-02-04 09:18:57.069501 | orchestrator | Tuesday 04 February 2025 09:18:57 +0000 (0:00:01.137) 0:08:01.580 ****** 2025-02-04 09:18:57.928159 | orchestrator | ok: [testbed-manager] 2025-02-04 09:18:57.931401 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:18:57.932052 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:18:57.932081 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:18:57.932101 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:18:57.932582 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:18:57.933623 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:18:57.934297 | orchestrator | 2025-02-04 09:18:57.935010 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-02-04 09:18:57.935982 | orchestrator | Tuesday 04 February 2025 09:18:57 +0000 (0:00:00.865) 0:08:02.446 ****** 2025-02-04 09:18:58.438249 | orchestrator | changed: [testbed-manager] 2025-02-04 09:18:59.174151 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:18:59.174333 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:18:59.174906 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:18:59.175837 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:18:59.176285 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:18:59.176969 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:18:59.177824 | orchestrator | 2025-02-04 09:18:59.177885 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:18:59.177942 | orchestrator | 2025-02-04 09:18:59 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-04 09:18:59.178111 | orchestrator | 2025-02-04 09:18:59 | INFO  | Please wait and do not abort execution. 
2025-02-04 09:18:59.178146 | orchestrator | testbed-manager : ok=161  changed=39  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-02-04 09:18:59.178457 | orchestrator | testbed-node-0 : ok=169  changed=67  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-02-04 09:18:59.179979 | orchestrator | testbed-node-1 : ok=169  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-02-04 09:18:59.180999 | orchestrator | testbed-node-2 : ok=169  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-02-04 09:18:59.181227 | orchestrator | testbed-node-3 : ok=168  changed=64  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-02-04 09:18:59.182354 | orchestrator | testbed-node-4 : ok=168  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-02-04 09:18:59.182985 | orchestrator | testbed-node-5 : ok=168  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-02-04 09:18:59.183568 | orchestrator | 2025-02-04 09:18:59.184719 | orchestrator | 2025-02-04 09:18:59.185186 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:18:59.185230 | orchestrator | Tuesday 04 February 2025 09:18:59 +0000 (0:00:01.249) 0:08:03.695 ****** 2025-02-04 09:18:59.185444 | orchestrator | =============================================================================== 2025-02-04 09:18:59.186352 | orchestrator | osism.commons.packages : Install required packages --------------------- 74.16s 2025-02-04 09:18:59.186592 | orchestrator | osism.commons.packages : Download required packages -------------------- 34.41s 2025-02-04 09:18:59.187149 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.35s 2025-02-04 09:18:59.187902 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.61s 2025-02-04 09:18:59.189131 | orchestrator | osism.services.docker : Install docker package ------------------------- 13.34s 2025-02-04 09:18:59.189442 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 13.04s 2025-02-04 09:18:59.189853 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.74s 2025-02-04 09:18:59.190212 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.97s 2025-02-04 09:18:59.190719 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.91s 2025-02-04 09:18:59.190815 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.88s 2025-02-04 09:18:59.191321 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.19s 2025-02-04 09:18:59.191777 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.12s 2025-02-04 09:18:59.192743 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.07s 2025-02-04 09:18:59.193909 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.82s 2025-02-04 09:18:59.194826 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.84s 2025-02-04 09:18:59.195777 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required --- 6.38s 2025-02-04 09:18:59.196234 | orchestrator | osism.services.docker : Ensure that some packages are not installed ----- 6.16s 2025-02-04 09:18:59.197148 | 
orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.00s 2025-02-04 09:18:59.197532 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.99s 2025-02-04 09:18:59.198364 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 5.97s 2025-02-04 09:18:59.994772 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-02-04 09:19:01.636632 | orchestrator | + osism apply network 2025-02-04 09:19:01.636777 | orchestrator | 2025-02-04 09:19:01 | INFO  | Task 87563baa-b952-4040-8e69-dba62cf44018 (network) was prepared for execution. 2025-02-04 09:19:05.121902 | orchestrator | 2025-02-04 09:19:01 | INFO  | It takes a moment until task 87563baa-b952-4040-8e69-dba62cf44018 (network) has been started and output is visible here. 2025-02-04 09:19:05.122149 | orchestrator | 2025-02-04 09:19:05.122235 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-02-04 09:19:05.122261 | orchestrator | 2025-02-04 09:19:05.122885 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-02-04 09:19:05.126265 | orchestrator | Tuesday 04 February 2025 09:19:05 +0000 (0:00:00.241) 0:00:00.241 ****** 2025-02-04 09:19:05.223758 | orchestrator | [WARNING]: Found variable using reserved name: q 2025-02-04 09:19:05.294307 | orchestrator | ok: [testbed-manager] 2025-02-04 09:19:05.386737 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:19:05.487215 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:19:05.570950 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:19:05.834449 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:19:05.992391 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:19:05.992583 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:19:05.992627 | orchestrator | 2025-02-04 09:19:05.992667 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-02-04 09:19:05.993027 | orchestrator | Tuesday 04 February 2025 09:19:05 +0000 (0:00:00.870) 0:00:01.112 ****** 2025-02-04 09:19:07.476929 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:19:07.480190 | orchestrator | 2025-02-04 09:19:09.735757 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-02-04 09:19:09.735883 | orchestrator | Tuesday 04 February 2025 09:19:07 +0000 (0:00:01.482) 0:00:02.595 ****** 2025-02-04 09:19:09.735919 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:19:09.735992 | orchestrator | ok: [testbed-manager] 2025-02-04 09:19:09.736015 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:19:09.736337 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:19:09.736378 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:19:09.736929 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:19:09.737155 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:19:09.737546 | orchestrator | 2025-02-04 09:19:09.738289 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-02-04 09:19:09.738672 | orchestrator | Tuesday 04 February 2025 09:19:09 +0000 (0:00:02.262) 0:00:04.858 ****** 2025-02-04 09:19:11.669308 | orchestrator | ok: [testbed-manager] 2025-02-04 09:19:11.672215 | orchestrator | ok: 
[testbed-node-0] 2025-02-04 09:19:11.672530 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:19:11.672561 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:19:11.672581 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:19:11.673204 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:19:11.674072 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:19:11.674955 | orchestrator | 2025-02-04 09:19:11.675674 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-02-04 09:19:11.676982 | orchestrator | Tuesday 04 February 2025 09:19:11 +0000 (0:00:01.932) 0:00:06.790 ****** 2025-02-04 09:19:12.475378 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-02-04 09:19:12.476030 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-02-04 09:19:12.477136 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-02-04 09:19:12.479879 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-02-04 09:19:12.480000 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-02-04 09:19:12.955043 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-02-04 09:19:12.955371 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-02-04 09:19:12.956338 | orchestrator | 2025-02-04 09:19:12.957668 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-02-04 09:19:12.958807 | orchestrator | Tuesday 04 February 2025 09:19:12 +0000 (0:00:01.281) 0:00:08.072 ****** 2025-02-04 09:19:14.902672 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-04 09:19:14.902859 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-02-04 09:19:14.905330 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-02-04 09:19:14.906894 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-02-04 09:19:14.906965 | orchestrator | ok: [testbed-manager -> localhost] 2025-02-04 09:19:14.906995 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-02-04 09:19:14.909936 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-02-04 09:19:14.910967 | orchestrator | 2025-02-04 09:19:14.911753 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-02-04 09:19:14.912594 | orchestrator | Tuesday 04 February 2025 09:19:14 +0000 (0:00:01.951) 0:00:10.023 ****** 2025-02-04 09:19:16.618417 | orchestrator | changed: [testbed-manager] 2025-02-04 09:19:16.618801 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:19:16.619586 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:19:16.623363 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:19:16.623704 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:19:16.624341 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:19:16.624776 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:19:16.625368 | orchestrator | 2025-02-04 09:19:16.626067 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-02-04 09:19:16.626366 | orchestrator | Tuesday 04 February 2025 09:19:16 +0000 (0:00:01.713) 0:00:11.737 ****** 2025-02-04 09:19:17.246003 | orchestrator | ok: [testbed-manager -> localhost] 2025-02-04 09:19:17.355967 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-04 09:19:17.887087 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-02-04 09:19:17.887321 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-02-04 09:19:17.887558 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-02-04 
09:19:17.888343 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-02-04 09:19:17.889098 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-02-04 09:19:17.889156 | orchestrator | 2025-02-04 09:19:17.889217 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-02-04 09:19:17.889422 | orchestrator | Tuesday 04 February 2025 09:19:17 +0000 (0:00:01.271) 0:00:13.008 ****** 2025-02-04 09:19:18.412455 | orchestrator | ok: [testbed-manager] 2025-02-04 09:19:18.681029 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:19:19.181605 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:19:19.181775 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:19:19.181815 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:19:19.181831 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:19:19.181862 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:19:19.185057 | orchestrator | 2025-02-04 09:19:19.185163 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-02-04 09:19:19.358280 | orchestrator | Tuesday 04 February 2025 09:19:19 +0000 (0:00:01.294) 0:00:14.302 ****** 2025-02-04 09:19:19.358435 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:19:19.459103 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:19:19.558901 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:19:19.661731 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:19:19.910971 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:19:20.079855 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:19:20.080248 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:19:20.081910 | orchestrator | 2025-02-04 09:19:20.085804 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-02-04 09:19:20.086168 | orchestrator | Tuesday 04 February 2025 09:19:20 +0000 (0:00:00.894) 0:00:15.197 ****** 2025-02-04 09:19:22.171025 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:19:22.171713 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:19:22.171758 | orchestrator | ok: [testbed-manager] 2025-02-04 09:19:22.172737 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:19:22.173547 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:19:22.174252 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:19:22.177090 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:19:24.118118 | orchestrator | 2025-02-04 09:19:24.118223 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-02-04 09:19:24.118238 | orchestrator | Tuesday 04 February 2025 09:19:22 +0000 (0:00:02.096) 0:00:17.294 ****** 2025-02-04 09:19:24.118263 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-02-04 09:19:24.118361 | orchestrator | changed: [testbed-node-0] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-02-04 09:19:24.122498 | orchestrator | changed: [testbed-node-1] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-02-04 09:19:24.123896 | orchestrator | changed: [testbed-node-3] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-02-04 09:19:24.123926 | orchestrator | changed: [testbed-node-2] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-02-04 09:19:24.123939 | orchestrator | changed: 
[testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-02-04 09:19:24.123951 | orchestrator | changed: [testbed-node-5] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-02-04 09:19:24.123963 | orchestrator | changed: [testbed-node-4] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-02-04 09:19:24.123975 | orchestrator | 2025-02-04 09:19:24.123987 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-02-04 09:19:24.124003 | orchestrator | Tuesday 04 February 2025 09:19:24 +0000 (0:00:01.940) 0:00:19.234 ****** 2025-02-04 09:19:25.662012 | orchestrator | ok: [testbed-manager] 2025-02-04 09:19:25.662254 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:19:25.662283 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:19:25.662588 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:19:25.663312 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:19:25.664253 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:19:25.664929 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:19:25.665260 | orchestrator | 2025-02-04 09:19:25.666083 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-02-04 09:19:25.666710 | orchestrator | Tuesday 04 February 2025 09:19:25 +0000 (0:00:01.548) 0:00:20.783 ****** 2025-02-04 09:19:27.207511 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:19:27.207748 | orchestrator | 2025-02-04 09:19:27.211154 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-02-04 09:19:27.884949 | orchestrator | Tuesday 04 February 2025 09:19:27 +0000 (0:00:01.542) 0:00:22.325 ****** 2025-02-04 09:19:27.885052 | orchestrator | ok: [testbed-manager] 2025-02-04 09:19:28.372329 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:19:28.373441 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:19:28.374313 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:19:28.374346 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:19:28.374368 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:19:28.374648 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:19:28.376008 | orchestrator | 2025-02-04 09:19:28.376160 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-02-04 09:19:28.380579 | orchestrator | Tuesday 04 February 2025 09:19:28 +0000 (0:00:01.164) 0:00:23.490 ****** 2025-02-04 09:19:28.554368 | orchestrator | ok: [testbed-manager] 2025-02-04 09:19:28.807205 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:19:28.910643 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:19:29.006161 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:19:29.111206 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:19:29.272315 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:19:29.273307 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:19:29.273945 | orchestrator | 2025-02-04 09:19:29.273979 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-02-04 09:19:29.274410 | orchestrator | Tuesday 04 February 2025 09:19:29 +0000 (0:00:00.897) 0:00:24.388 ****** 2025-02-04 
09:19:29.681834 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-02-04 09:19:29.682455 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-02-04 09:19:29.808763 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-02-04 09:19:29.809825 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-02-04 09:19:29.912286 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-02-04 09:19:30.444908 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-02-04 09:19:30.445083 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-02-04 09:19:30.447111 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-02-04 09:19:30.447187 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-02-04 09:19:30.449222 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-02-04 09:19:30.449622 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-02-04 09:19:30.449727 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-02-04 09:19:30.449763 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-02-04 09:19:30.451294 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-02-04 09:19:30.452228 | orchestrator | 2025-02-04 09:19:30.452894 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-02-04 09:19:30.453703 | orchestrator | Tuesday 04 February 2025 09:19:30 +0000 (0:00:01.178) 0:00:25.566 ****** 2025-02-04 09:19:30.821515 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:19:30.907905 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:19:30.994974 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:19:31.094267 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:19:31.181707 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:19:31.314667 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:19:31.316026 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:19:31.316162 | orchestrator | 2025-02-04 09:19:31.316563 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-02-04 09:19:31.316903 | orchestrator | Tuesday 04 February 2025 09:19:31 +0000 (0:00:00.871) 0:00:26.438 ****** 2025-02-04 09:19:31.483344 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:19:31.567947 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:19:31.651173 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:19:31.925243 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:19:32.007104 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:19:33.369855 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:19:33.370923 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:19:33.370982 | orchestrator | 2025-02-04 09:19:33.371278 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-02-04 09:19:33.374124 | orchestrator | Tuesday 04 February 2025 09:19:33 +0000 (0:00:02.050) 0:00:28.488 ****** 2025-02-04 09:19:33.544903 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:19:33.621785 | orchestrator | skipping: [testbed-node-0] 2025-02-04 
09:19:33.706744 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:19:33.784834 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:19:33.876270 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:19:33.924942 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:19:33.926340 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:19:33.928491 | orchestrator | 2025-02-04 09:19:33.929576 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:19:33.931064 | orchestrator | 2025-02-04 09:19:33 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-04 09:19:33.932061 | orchestrator | 2025-02-04 09:19:33 | INFO  | Please wait and do not abort execution. 2025-02-04 09:19:33.932126 | orchestrator | testbed-manager : ok=16  changed=3  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-02-04 09:19:33.933260 | orchestrator | testbed-node-0 : ok=16  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-02-04 09:19:33.933964 | orchestrator | testbed-node-1 : ok=16  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-02-04 09:19:33.934666 | orchestrator | testbed-node-2 : ok=16  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-02-04 09:19:33.935849 | orchestrator | testbed-node-3 : ok=16  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-02-04 09:19:33.936919 | orchestrator | testbed-node-4 : ok=16  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-02-04 09:19:33.937294 | orchestrator | testbed-node-5 : ok=16  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-02-04 09:19:33.938412 | orchestrator | 2025-02-04 09:19:33.938896 | orchestrator | 2025-02-04 09:19:33.939872 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:19:33.940020 | orchestrator | Tuesday 04 February 2025 09:19:33 +0000 (0:00:00.557) 0:00:29.046 ****** 2025-02-04 09:19:33.941037 | orchestrator | =============================================================================== 2025-02-04 09:19:33.941453 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.26s 2025-02-04 09:19:33.942300 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.10s 2025-02-04 09:19:33.942392 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 2.05s 2025-02-04 09:19:33.942934 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 1.95s 2025-02-04 09:19:33.943423 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 1.94s 2025-02-04 09:19:33.943854 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.93s 2025-02-04 09:19:33.944152 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.71s 2025-02-04 09:19:33.944687 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.55s 2025-02-04 09:19:33.945193 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.54s 2025-02-04 09:19:33.945707 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.48s 2025-02-04 09:19:33.946157 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.29s 2025-02-04 
09:19:33.946394 | orchestrator | osism.commons.network : Create required directories --------------------- 1.28s 2025-02-04 09:19:33.947028 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.27s 2025-02-04 09:19:33.947273 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.18s 2025-02-04 09:19:33.947648 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.16s 2025-02-04 09:19:33.948026 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.90s 2025-02-04 09:19:33.948257 | orchestrator | osism.commons.network : Copy interfaces file ---------------------------- 0.89s 2025-02-04 09:19:33.948805 | orchestrator | osism.commons.network : Include dummy interfaces ------------------------ 0.87s 2025-02-04 09:19:33.949050 | orchestrator | osism.commons.network : Gather variables for each operating system ------ 0.87s 2025-02-04 09:19:33.949382 | orchestrator | osism.commons.network : Netplan configuration changed ------------------- 0.56s 2025-02-04 09:19:34.671394 | orchestrator | + osism apply wireguard 2025-02-04 09:19:36.236731 | orchestrator | 2025-02-04 09:19:36 | INFO  | Task b41cf63c-2a4e-4d3e-85f4-d19e4bdc3414 (wireguard) was prepared for execution. 2025-02-04 09:19:39.624510 | orchestrator | 2025-02-04 09:19:36 | INFO  | It takes a moment until task b41cf63c-2a4e-4d3e-85f4-d19e4bdc3414 (wireguard) has been started and output is visible here. 2025-02-04 09:19:39.624704 | orchestrator | 2025-02-04 09:19:39.625374 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-02-04 09:19:39.625613 | orchestrator | 2025-02-04 09:19:39.626917 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-02-04 09:19:39.628728 | orchestrator | Tuesday 04 February 2025 09:19:39 +0000 (0:00:00.200) 0:00:00.200 ****** 2025-02-04 09:19:41.307189 | orchestrator | ok: [testbed-manager] 2025-02-04 09:19:41.308111 | orchestrator | 2025-02-04 09:19:41.308836 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-02-04 09:19:41.310000 | orchestrator | Tuesday 04 February 2025 09:19:41 +0000 (0:00:01.684) 0:00:01.885 ****** 2025-02-04 09:19:48.595095 | orchestrator | changed: [testbed-manager] 2025-02-04 09:19:48.595428 | orchestrator | 2025-02-04 09:19:48.596763 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-02-04 09:19:48.598615 | orchestrator | Tuesday 04 February 2025 09:19:48 +0000 (0:00:07.287) 0:00:09.173 ****** 2025-02-04 09:19:49.234594 | orchestrator | changed: [testbed-manager] 2025-02-04 09:19:49.234806 | orchestrator | 2025-02-04 09:19:49.234838 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-02-04 09:19:49.236492 | orchestrator | Tuesday 04 February 2025 09:19:49 +0000 (0:00:00.641) 0:00:09.814 ****** 2025-02-04 09:19:49.757938 | orchestrator | changed: [testbed-manager] 2025-02-04 09:19:49.758307 | orchestrator | 2025-02-04 09:19:49.760118 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-02-04 09:19:49.760719 | orchestrator | Tuesday 04 February 2025 09:19:49 +0000 (0:00:00.521) 0:00:10.336 ****** 2025-02-04 09:19:50.476758 | orchestrator | ok: [testbed-manager] 2025-02-04 09:19:50.478228 | orchestrator | 2025-02-04 09:19:50.478374 | 
orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-02-04 09:19:50.480416 | orchestrator | Tuesday 04 February 2025 09:19:50 +0000 (0:00:00.721) 0:00:11.057 ****** 2025-02-04 09:19:50.887786 | orchestrator | ok: [testbed-manager] 2025-02-04 09:19:50.888323 | orchestrator | 2025-02-04 09:19:50.888776 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-02-04 09:19:50.890001 | orchestrator | Tuesday 04 February 2025 09:19:50 +0000 (0:00:00.409) 0:00:11.467 ****** 2025-02-04 09:19:51.353155 | orchestrator | ok: [testbed-manager] 2025-02-04 09:19:51.354098 | orchestrator | 2025-02-04 09:19:51.354936 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-02-04 09:19:51.356679 | orchestrator | Tuesday 04 February 2025 09:19:51 +0000 (0:00:00.466) 0:00:11.934 ****** 2025-02-04 09:19:52.604307 | orchestrator | changed: [testbed-manager] 2025-02-04 09:19:52.604519 | orchestrator | 2025-02-04 09:19:52.606614 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-02-04 09:19:52.607021 | orchestrator | Tuesday 04 February 2025 09:19:52 +0000 (0:00:01.248) 0:00:13.182 ****** 2025-02-04 09:19:53.589918 | orchestrator | changed: [testbed-manager] => (item=None) 2025-02-04 09:19:53.590204 | orchestrator | changed: [testbed-manager] 2025-02-04 09:19:53.591906 | orchestrator | 2025-02-04 09:19:55.503808 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-02-04 09:19:55.503910 | orchestrator | Tuesday 04 February 2025 09:19:53 +0000 (0:00:00.986) 0:00:14.169 ****** 2025-02-04 09:19:55.503931 | orchestrator | changed: [testbed-manager] 2025-02-04 09:19:55.504035 | orchestrator | 2025-02-04 09:19:55.504229 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-02-04 09:19:55.505006 | orchestrator | Tuesday 04 February 2025 09:19:55 +0000 (0:00:01.913) 0:00:16.083 ****** 2025-02-04 09:19:56.597142 | orchestrator | changed: [testbed-manager] 2025-02-04 09:19:56.597889 | orchestrator | 2025-02-04 09:19:56.598192 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:19:56.598855 | orchestrator | 2025-02-04 09:19:56 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-04 09:19:56.599104 | orchestrator | 2025-02-04 09:19:56 | INFO  | Please wait and do not abort execution. 
2025-02-04 09:19:56.599746 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-04 09:19:56.601314 | orchestrator | 2025-02-04 09:19:56.602265 | orchestrator | 2025-02-04 09:19:56.603399 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:19:56.604217 | orchestrator | Tuesday 04 February 2025 09:19:56 +0000 (0:00:01.092) 0:00:17.176 ****** 2025-02-04 09:19:56.604903 | orchestrator | =============================================================================== 2025-02-04 09:19:56.605252 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.29s 2025-02-04 09:19:56.606355 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.91s 2025-02-04 09:19:56.606785 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.68s 2025-02-04 09:19:56.607276 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.25s 2025-02-04 09:19:56.607782 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 1.09s 2025-02-04 09:19:56.608122 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.99s 2025-02-04 09:19:56.608556 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.72s 2025-02-04 09:19:56.609171 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.64s 2025-02-04 09:19:56.610160 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.52s 2025-02-04 09:19:56.610590 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.47s 2025-02-04 09:19:57.295405 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.41s 2025-02-04 09:19:57.295576 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-02-04 09:19:57.335605 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-02-04 09:19:57.410433 | orchestrator | Dload Upload Total Spent Left Speed 2025-02-04 09:19:57.410593 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 187 0 --:--:-- --:--:-- --:--:-- 189 2025-02-04 09:19:57.425409 | orchestrator | + osism apply --environment custom workarounds 2025-02-04 09:19:58.994702 | orchestrator | 2025-02-04 09:19:58 | INFO  | Trying to run play workarounds in environment custom 2025-02-04 09:19:59.046752 | orchestrator | 2025-02-04 09:19:59 | INFO  | Task d399d3ca-4cd7-4d18-8080-a487b873814d (workarounds) was prepared for execution. 2025-02-04 09:20:02.328178 | orchestrator | 2025-02-04 09:19:59 | INFO  | It takes a moment until task d399d3ca-4cd7-4d18-8080-a487b873814d (workarounds) has been started and output is visible here. 
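Editor's note: the wireguard play above follows a generate/render/restart pattern — key material is created once on the manager, wg0.conf is rendered from a template, and a handler restarts wg-quick@wg0 only when the configuration actually changed. A minimal sketch of that pattern in plain Ansible, with hypothetical template and key paths (an illustration, not the actual osism.services.wireguard role):

    - name: Create public and private key - server
      # wg(8) from wireguard-tools; "creates" keeps the task idempotent on re-runs
      ansible.builtin.shell: |
        umask 077
        wg genkey | tee /etc/wireguard/server.key | wg pubkey > /etc/wireguard/server.pub
      args:
        creates: /etc/wireguard/server.key

    - name: Create preshared key
      ansible.builtin.shell: umask 077 && wg genpsk > /etc/wireguard/server.psk
      args:
        creates: /etc/wireguard/server.psk

    - name: Copy wg0.conf configuration file
      ansible.builtin.template:
        src: wg0.conf.j2                # hypothetical template name
        dest: /etc/wireguard/wg0.conf
        mode: "0600"
      notify: Restart wg0 service       # handler fires only on change

    - name: Manage wg-quick@wg0.service service
      ansible.builtin.systemd:
        name: wg-quick@wg0.service
        enabled: true
        state: started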
2025-02-04 09:20:02.328362 | orchestrator | 2025-02-04 09:20:02.328452 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-04 09:20:02.328478 | orchestrator | 2025-02-04 09:20:02.330374 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-02-04 09:20:02.330413 | orchestrator | Tuesday 04 February 2025 09:20:02 +0000 (0:00:00.157) 0:00:00.157 ****** 2025-02-04 09:20:02.502093 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-02-04 09:20:02.593197 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-02-04 09:20:02.680798 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-02-04 09:20:02.765088 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-02-04 09:20:02.986968 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-02-04 09:20:03.141252 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-02-04 09:20:03.142507 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-02-04 09:20:03.144156 | orchestrator | 2025-02-04 09:20:03.144809 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-02-04 09:20:03.145432 | orchestrator | 2025-02-04 09:20:03.146196 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-02-04 09:20:03.146821 | orchestrator | Tuesday 04 February 2025 09:20:03 +0000 (0:00:00.817) 0:00:00.974 ****** 2025-02-04 09:20:05.941736 | orchestrator | ok: [testbed-manager] 2025-02-04 09:20:05.941991 | orchestrator | 2025-02-04 09:20:05.942800 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-02-04 09:20:05.948747 | orchestrator | 2025-02-04 09:20:05.948835 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-02-04 09:20:07.839712 | orchestrator | Tuesday 04 February 2025 09:20:05 +0000 (0:00:02.796) 0:00:03.770 ****** 2025-02-04 09:20:07.839932 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:20:07.840059 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:20:07.841110 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:20:07.841986 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:20:07.843071 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:20:07.843708 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:20:07.844104 | orchestrator | 2025-02-04 09:20:07.845185 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-02-04 09:20:07.845476 | orchestrator | 2025-02-04 09:20:07.846144 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-02-04 09:20:07.846729 | orchestrator | Tuesday 04 February 2025 09:20:07 +0000 (0:00:01.897) 0:00:05.668 ****** 2025-02-04 09:20:09.368498 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-02-04 09:20:09.370760 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-02-04 09:20:09.370851 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-02-04 09:20:09.370875 | orchestrator | changed: [testbed-node-1] => 
(item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-02-04 09:20:09.373049 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-02-04 09:20:09.373994 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-02-04 09:20:09.374734 | orchestrator | 2025-02-04 09:20:09.375351 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-02-04 09:20:09.376210 | orchestrator | Tuesday 04 February 2025 09:20:09 +0000 (0:00:01.529) 0:00:07.197 ****** 2025-02-04 09:20:12.516648 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:20:12.516958 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:20:12.518271 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:20:12.519133 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:20:12.519180 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:20:12.521658 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:20:12.522402 | orchestrator | 2025-02-04 09:20:12.522453 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-02-04 09:20:12.523198 | orchestrator | Tuesday 04 February 2025 09:20:12 +0000 (0:00:03.151) 0:00:10.349 ****** 2025-02-04 09:20:12.689167 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:20:12.771493 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:20:13.008401 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:20:13.091729 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:20:13.262100 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:20:13.262444 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:20:13.262705 | orchestrator | 2025-02-04 09:20:13.262743 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-02-04 09:20:13.264761 | orchestrator | 2025-02-04 09:20:13.268191 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-02-04 09:20:15.034320 | orchestrator | Tuesday 04 February 2025 09:20:13 +0000 (0:00:00.746) 0:00:11.095 ****** 2025-02-04 09:20:15.034461 | orchestrator | changed: [testbed-manager] 2025-02-04 09:20:15.035969 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:20:15.037039 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:20:15.039036 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:20:15.039469 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:20:15.041748 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:20:15.043441 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:20:15.044283 | orchestrator | 2025-02-04 09:20:15.044981 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-02-04 09:20:15.045390 | orchestrator | Tuesday 04 February 2025 09:20:15 +0000 (0:00:01.769) 0:00:12.865 ****** 2025-02-04 09:20:16.732646 | orchestrator | changed: [testbed-manager] 2025-02-04 09:20:16.733373 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:20:16.733414 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:20:16.733438 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:20:16.734198 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:20:16.734705 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:20:16.735398 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:20:16.735818 | orchestrator | 
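Editor's note: the "Add a workaround service" play stages a script and a matching systemd unit on every node; the tasks that follow reload systemd so the new unit file is picked up, then enable it on Debian-family hosts only. A minimal sketch of this copy / daemon-reload / enable pattern, with a hypothetical unit file name (not the actual testbed playbook):

    - name: Copy workarounds systemd unit file
      ansible.builtin.copy:
        src: workarounds.service        # hypothetical source file
        dest: /etc/systemd/system/workarounds.service
        mode: "0644"

    - name: Reload systemd daemon
      # required so systemd sees the freshly copied unit file
      ansible.builtin.systemd:
        daemon_reload: true

    - name: Enable workarounds.service (Debian)
      ansible.builtin.systemd:
        name: workarounds.service
        enabled: true
      when: ansible_os_family == "Debian"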
2025-02-04 09:20:16.736278 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-02-04 09:20:16.737107 | orchestrator | Tuesday 04 February 2025 09:20:16 +0000 (0:00:01.691) 0:00:14.557 ****** 2025-02-04 09:20:18.572107 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:20:18.574278 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:20:18.574311 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:20:18.574409 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:20:18.580963 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:20:20.248470 | orchestrator | ok: [testbed-manager] 2025-02-04 09:20:20.248634 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:20:20.248657 | orchestrator | 2025-02-04 09:20:20.248675 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-02-04 09:20:20.248692 | orchestrator | Tuesday 04 February 2025 09:20:18 +0000 (0:00:01.844) 0:00:16.401 ****** 2025-02-04 09:20:20.248726 | orchestrator | changed: [testbed-manager] 2025-02-04 09:20:20.251190 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:20:20.251273 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:20:20.251292 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:20:20.251306 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:20:20.251321 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:20:20.251339 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:20:20.252221 | orchestrator | 2025-02-04 09:20:20.252701 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-02-04 09:20:20.253584 | orchestrator | Tuesday 04 February 2025 09:20:20 +0000 (0:00:01.677) 0:00:18.079 ****** 2025-02-04 09:20:20.442592 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:20:20.586195 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:20:20.700520 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:20:20.933715 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:20:21.016977 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:20:21.160600 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:20:21.160998 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:20:21.161908 | orchestrator | 2025-02-04 09:20:21.162931 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-02-04 09:20:21.163702 | orchestrator | 2025-02-04 09:20:21.164724 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-02-04 09:20:21.165125 | orchestrator | Tuesday 04 February 2025 09:20:21 +0000 (0:00:00.911) 0:00:18.991 ****** 2025-02-04 09:20:23.851063 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:20:23.851298 | orchestrator | ok: [testbed-manager] 2025-02-04 09:20:23.851886 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:20:23.852519 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:20:23.853092 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:20:23.853959 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:20:23.854253 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:20:23.854754 | orchestrator | 2025-02-04 09:20:23.855226 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:20:23.855661 | orchestrator | 2025-02-04 09:20:23 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-02-04 09:20:23.855925 | orchestrator | 2025-02-04 09:20:23 | INFO  | Please wait and do not abort execution. 2025-02-04 09:20:23.856609 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-04 09:20:23.857276 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-04 09:20:23.857578 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-04 09:20:23.858140 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-04 09:20:23.858464 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-04 09:20:23.859270 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-04 09:20:23.859753 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-04 09:20:23.860029 | orchestrator | 2025-02-04 09:20:23.860432 | orchestrator | 2025-02-04 09:20:23.860664 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:20:23.861027 | orchestrator | Tuesday 04 February 2025 09:20:23 +0000 (0:00:02.691) 0:00:21.682 ****** 2025-02-04 09:20:23.861307 | orchestrator | =============================================================================== 2025-02-04 09:20:23.862875 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.15s 2025-02-04 09:20:23.863318 | orchestrator | Apply netplan configuration --------------------------------------------- 2.80s 2025-02-04 09:20:23.863933 | orchestrator | Install python3-docker -------------------------------------------------- 2.69s 2025-02-04 09:20:23.864427 | orchestrator | Apply netplan configuration --------------------------------------------- 1.90s 2025-02-04 09:20:23.864659 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.84s 2025-02-04 09:20:23.864946 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.77s 2025-02-04 09:20:23.865120 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.69s 2025-02-04 09:20:23.865448 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.68s 2025-02-04 09:20:23.865790 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.53s 2025-02-04 09:20:23.866238 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.91s 2025-02-04 09:20:23.866596 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.82s 2025-02-04 09:20:23.867372 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.75s 2025-02-04 09:20:24.515657 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-02-04 09:20:26.071977 | orchestrator | 2025-02-04 09:20:26 | INFO  | Task 68bd0a3c-1f4e-499e-b5a2-983a645acc67 (reboot) was prepared for execution. 2025-02-04 09:20:29.416354 | orchestrator | 2025-02-04 09:20:26 | INFO  | It takes a moment until task 68bd0a3c-1f4e-499e-b5a2-983a645acc67 (reboot) has been started and output is visible here. 
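Editor's note: the reboot task below runs one "Reboot systems" play per node (hence the repeated play banner), guarded by the ireallymeanit=yes extra variable passed on the command line, and deliberately does not wait for the hosts to come back. A minimal sketch of this guard-then-fire-and-forget pattern, under assumed variable names (an illustration, not the actual playbook):

    - name: Exit playbook, if user did not mean to reboot systems
      ansible.builtin.fail:
        msg: "Pass -e ireallymeanit=yes to confirm the reboot."
      when: ireallymeanit | default('no') != 'yes'

    - name: Reboot system - do not wait for the reboot to complete
      # async/poll 0 detaches the task, so the dropped SSH connection is not an error;
      # the short sleep lets Ansible collect the result before shutdown begins
      ansible.builtin.shell: sleep 2 && shutdown -r now "Ansible triggered reboot"
      async: 1
      poll: 0

    - name: Reboot system - wait for the reboot to complete
      ansible.builtin.wait_for_connection:
        delay: 10
        timeout: 600
      when: reboot_wait | default(false)   # skipped in this run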
2025-02-04 09:20:29.416473 | orchestrator | 2025-02-04 09:20:29.417351 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-02-04 09:20:29.418729 | orchestrator | 2025-02-04 09:20:29.421158 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-02-04 09:20:29.422151 | orchestrator | Tuesday 04 February 2025 09:20:29 +0000 (0:00:00.166) 0:00:00.166 ****** 2025-02-04 09:20:29.515448 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:20:29.517132 | orchestrator | 2025-02-04 09:20:29.518250 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-02-04 09:20:29.518270 | orchestrator | Tuesday 04 February 2025 09:20:29 +0000 (0:00:00.101) 0:00:00.268 ****** 2025-02-04 09:20:30.514912 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:20:30.625482 | orchestrator | 2025-02-04 09:20:30.625656 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-02-04 09:20:30.625680 | orchestrator | Tuesday 04 February 2025 09:20:30 +0000 (0:00:00.998) 0:00:01.266 ****** 2025-02-04 09:20:30.625712 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:20:30.625975 | orchestrator | 2025-02-04 09:20:30.626009 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-02-04 09:20:30.626203 | orchestrator | 2025-02-04 09:20:30.626718 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-02-04 09:20:30.627102 | orchestrator | Tuesday 04 February 2025 09:20:30 +0000 (0:00:00.109) 0:00:01.376 ****** 2025-02-04 09:20:30.732438 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:20:30.732886 | orchestrator | 2025-02-04 09:20:30.732928 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-02-04 09:20:30.733356 | orchestrator | Tuesday 04 February 2025 09:20:30 +0000 (0:00:00.109) 0:00:01.486 ****** 2025-02-04 09:20:31.418904 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:20:31.419060 | orchestrator | 2025-02-04 09:20:31.419241 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-02-04 09:20:31.419988 | orchestrator | Tuesday 04 February 2025 09:20:31 +0000 (0:00:00.686) 0:00:02.173 ****** 2025-02-04 09:20:31.548757 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:20:31.549380 | orchestrator | 2025-02-04 09:20:31.549427 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-02-04 09:20:31.550180 | orchestrator | 2025-02-04 09:20:31.550973 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-02-04 09:20:31.551191 | orchestrator | Tuesday 04 February 2025 09:20:31 +0000 (0:00:00.122) 0:00:02.295 ****** 2025-02-04 09:20:31.792080 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:20:31.792207 | orchestrator | 2025-02-04 09:20:31.792994 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-02-04 09:20:31.793385 | orchestrator | Tuesday 04 February 2025 09:20:31 +0000 (0:00:00.250) 0:00:02.546 ****** 2025-02-04 09:20:32.494938 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:20:32.495138 | orchestrator | 2025-02-04 09:20:32.495486 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-02-04 
09:20:32.495826 | orchestrator | Tuesday 04 February 2025 09:20:32 +0000 (0:00:00.700) 0:00:03.246 ****** 2025-02-04 09:20:32.616083 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:20:32.616245 | orchestrator | 2025-02-04 09:20:32.616294 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-02-04 09:20:32.616596 | orchestrator | 2025-02-04 09:20:32.616983 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-02-04 09:20:32.617792 | orchestrator | Tuesday 04 February 2025 09:20:32 +0000 (0:00:00.121) 0:00:03.367 ****** 2025-02-04 09:20:32.725845 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:20:32.726167 | orchestrator | 2025-02-04 09:20:32.726863 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-02-04 09:20:32.727180 | orchestrator | Tuesday 04 February 2025 09:20:32 +0000 (0:00:00.111) 0:00:03.478 ****** 2025-02-04 09:20:33.509821 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:20:33.510802 | orchestrator | 2025-02-04 09:20:33.512326 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-02-04 09:20:33.513405 | orchestrator | Tuesday 04 February 2025 09:20:33 +0000 (0:00:00.782) 0:00:04.261 ****** 2025-02-04 09:20:33.632718 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:20:33.634910 | orchestrator | 2025-02-04 09:20:33.636493 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-02-04 09:20:33.637353 | orchestrator | 2025-02-04 09:20:33.638355 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-02-04 09:20:33.639683 | orchestrator | Tuesday 04 February 2025 09:20:33 +0000 (0:00:00.119) 0:00:04.381 ****** 2025-02-04 09:20:33.749422 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:20:33.750344 | orchestrator | 2025-02-04 09:20:33.750409 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-02-04 09:20:33.751757 | orchestrator | Tuesday 04 February 2025 09:20:33 +0000 (0:00:00.120) 0:00:04.501 ****** 2025-02-04 09:20:34.446706 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:20:34.447175 | orchestrator | 2025-02-04 09:20:34.447658 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-02-04 09:20:34.448542 | orchestrator | Tuesday 04 February 2025 09:20:34 +0000 (0:00:00.698) 0:00:05.200 ****** 2025-02-04 09:20:34.587063 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:20:34.587906 | orchestrator | 2025-02-04 09:20:34.587980 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-02-04 09:20:34.588317 | orchestrator | 2025-02-04 09:20:34.589411 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-02-04 09:20:34.591359 | orchestrator | Tuesday 04 February 2025 09:20:34 +0000 (0:00:00.140) 0:00:05.340 ****** 2025-02-04 09:20:34.698739 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:20:34.699314 | orchestrator | 2025-02-04 09:20:34.701933 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-02-04 09:20:34.703392 | orchestrator | Tuesday 04 February 2025 09:20:34 +0000 (0:00:00.111) 0:00:05.452 ****** 2025-02-04 09:20:35.355326 | orchestrator | changed: [testbed-node-5] 2025-02-04 
09:20:35.356217 | orchestrator | 2025-02-04 09:20:35.357164 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-02-04 09:20:35.357995 | orchestrator | Tuesday 04 February 2025 09:20:35 +0000 (0:00:00.656) 0:00:06.108 ****** 2025-02-04 09:20:35.389318 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:20:35.390189 | orchestrator | 2025-02-04 09:20:35.391457 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:20:35.392229 | orchestrator | 2025-02-04 09:20:35 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-04 09:20:35.392750 | orchestrator | 2025-02-04 09:20:35 | INFO  | Please wait and do not abort execution. 2025-02-04 09:20:35.394127 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-04 09:20:35.394617 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-04 09:20:35.395218 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-04 09:20:35.395514 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-04 09:20:35.397506 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-04 09:20:35.402437 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-04 09:20:35.402537 | orchestrator | 2025-02-04 09:20:35.403791 | orchestrator | 2025-02-04 09:20:35.404572 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:20:35.405070 | orchestrator | Tuesday 04 February 2025 09:20:35 +0000 (0:00:00.034) 0:00:06.143 ****** 2025-02-04 09:20:35.405620 | orchestrator | =============================================================================== 2025-02-04 09:20:35.405912 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.52s 2025-02-04 09:20:35.406884 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.81s 2025-02-04 09:20:35.966684 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.65s 2025-02-04 09:20:35.966773 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-02-04 09:20:37.571411 | orchestrator | 2025-02-04 09:20:37 | INFO  | Task 59a3fc12-8b70-4109-a8e1-e8f7ff67ae05 (wait-for-connection) was prepared for execution. 2025-02-04 09:20:40.933333 | orchestrator | 2025-02-04 09:20:37 | INFO  | It takes a moment until task 59a3fc12-8b70-4109-a8e1-e8f7ff67ae05 (wait-for-connection) has been started and output is visible here. 
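The reboot play above is intentionally fire-and-forget: each node's "do not wait for the reboot to complete" task changed while the wait variant was skipped, and reachability is verified by the separate wait-for-connection run that follows. A minimal shell sketch of the same trigger step (node names from the log, direct SSH access assumed):

    for n in testbed-node-{0..5}; do
        ssh "$n" 'sudo systemctl reboot' || true   # the connection drops as the node goes down; ignore the error
    done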
2025-02-04 09:20:40.933431 | orchestrator | 2025-02-04 09:20:40.933843 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-02-04 09:20:40.934685 | orchestrator | 2025-02-04 09:20:40.936763 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-02-04 09:20:53.533629 | orchestrator | Tuesday 04 February 2025 09:20:40 +0000 (0:00:00.199) 0:00:00.199 ****** 2025-02-04 09:20:53.533808 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:20:53.534541 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:20:53.534924 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:20:53.534952 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:20:53.534972 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:20:53.535539 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:20:53.536167 | orchestrator | 2025-02-04 09:20:53.536378 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:20:53.536788 | orchestrator | 2025-02-04 09:20:53 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-04 09:20:53.537424 | orchestrator | 2025-02-04 09:20:53 | INFO  | Please wait and do not abort execution. 2025-02-04 09:20:53.537457 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-04 09:20:53.537859 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-04 09:20:53.539289 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-04 09:20:53.539353 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-04 09:20:53.539984 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-04 09:20:53.540238 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-04 09:20:53.540655 | orchestrator | 2025-02-04 09:20:53.540802 | orchestrator | 2025-02-04 09:20:53.541233 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:20:53.541663 | orchestrator | Tuesday 04 February 2025 09:20:53 +0000 (0:00:12.600) 0:00:12.799 ****** 2025-02-04 09:20:53.541996 | orchestrator | =============================================================================== 2025-02-04 09:20:53.542326 | orchestrator | Wait until remote system is reachable ---------------------------------- 12.60s 2025-02-04 09:20:54.144697 | orchestrator | + osism apply hddtemp 2025-02-04 09:20:55.631191 | orchestrator | 2025-02-04 09:20:55 | INFO  | Task bd1e55a0-31f7-434f-9634-0881749e4339 (hddtemp) was prepared for execution. 2025-02-04 09:20:58.974324 | orchestrator | 2025-02-04 09:20:55 | INFO  | It takes a moment until task bd1e55a0-31f7-434f-9634-0881749e4339 (hddtemp) has been started and output is visible here. 
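wait-for-connection does nothing more than poll each rebooted node until Ansible can reach it again; here all six nodes came back within about 12.6 seconds. A minimal sketch of that polling loop in plain shell (timeout values assumed):

    for n in testbed-node-{0..5}; do
        until ssh -o ConnectTimeout=5 "$n" true 2>/dev/null; do
            sleep 5   # node still rebooting; retry
        done
    done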
2025-02-04 09:20:58.974493 | orchestrator | 2025-02-04 09:20:58.976058 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-02-04 09:20:58.977507 | orchestrator | 2025-02-04 09:20:58.977604 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-02-04 09:20:58.978555 | orchestrator | Tuesday 04 February 2025 09:20:58 +0000 (0:00:00.218) 0:00:00.218 ****** 2025-02-04 09:20:59.153939 | orchestrator | ok: [testbed-manager] 2025-02-04 09:20:59.246302 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:20:59.325680 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:20:59.412481 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:20:59.603711 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:20:59.745179 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:20:59.746743 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:20:59.747209 | orchestrator | 2025-02-04 09:20:59.747226 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-02-04 09:20:59.748619 | orchestrator | Tuesday 04 February 2025 09:20:59 +0000 (0:00:00.770) 0:00:00.988 ****** 2025-02-04 09:21:01.050259 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:21:01.050731 | orchestrator | 2025-02-04 09:21:01.052047 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-02-04 09:21:01.057755 | orchestrator | Tuesday 04 February 2025 09:21:01 +0000 (0:00:01.306) 0:00:02.294 ****** 2025-02-04 09:21:03.212603 | orchestrator | ok: [testbed-manager] 2025-02-04 09:21:03.215470 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:21:03.215611 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:21:03.215634 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:21:03.215651 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:21:03.216788 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:21:03.217025 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:21:03.217946 | orchestrator | 2025-02-04 09:21:03.218090 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-02-04 09:21:03.220730 | orchestrator | Tuesday 04 February 2025 09:21:03 +0000 (0:00:02.162) 0:00:04.456 ****** 2025-02-04 09:21:03.827774 | orchestrator | changed: [testbed-manager] 2025-02-04 09:21:04.025768 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:21:04.466263 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:21:04.467240 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:21:04.469840 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:21:04.470724 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:21:04.471137 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:21:04.472408 | orchestrator | 2025-02-04 09:21:04.473531 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-02-04 09:21:04.474624 | orchestrator | Tuesday 04 February 2025 09:21:04 +0000 (0:00:01.254) 0:00:05.711 ****** 2025-02-04 09:21:05.876841 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:21:05.877655 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:21:05.878372 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:21:05.879458 | orchestrator | ok: [testbed-node-3] 2025-02-04 
09:21:05.879700 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:21:05.880884 | orchestrator | ok: [testbed-manager] 2025-02-04 09:21:05.882321 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:21:05.883247 | orchestrator | 2025-02-04 09:21:05.885051 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-02-04 09:21:05.885936 | orchestrator | Tuesday 04 February 2025 09:21:05 +0000 (0:00:01.407) 0:00:07.119 ****** 2025-02-04 09:21:06.143280 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:21:06.246984 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:21:06.337553 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:21:06.447323 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:21:06.590756 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:21:06.592948 | orchestrator | changed: [testbed-manager] 2025-02-04 09:21:06.594113 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:21:06.595268 | orchestrator | 2025-02-04 09:21:06.595831 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-02-04 09:21:06.597059 | orchestrator | Tuesday 04 February 2025 09:21:06 +0000 (0:00:00.716) 0:00:07.835 ****** 2025-02-04 09:21:20.571410 | orchestrator | changed: [testbed-manager] 2025-02-04 09:21:20.571523 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:21:20.571533 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:21:20.571541 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:21:20.572777 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:21:20.574065 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:21:20.574989 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:21:20.576076 | orchestrator | 2025-02-04 09:21:20.576317 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-02-04 09:21:20.577197 | orchestrator | Tuesday 04 February 2025 09:21:20 +0000 (0:00:13.973) 0:00:21.809 ****** 2025-02-04 09:21:21.825264 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:21:21.832054 | orchestrator | 2025-02-04 09:21:21.833754 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-02-04 09:21:21.834199 | orchestrator | Tuesday 04 February 2025 09:21:21 +0000 (0:00:01.256) 0:00:23.065 ****** 2025-02-04 09:21:23.851703 | orchestrator | changed: [testbed-manager] 2025-02-04 09:21:23.853736 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:21:23.854903 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:21:23.856651 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:21:23.859451 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:21:23.860771 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:21:23.862162 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:21:23.862609 | orchestrator | 2025-02-04 09:21:23.864506 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:21:23.864661 | orchestrator | 2025-02-04 09:21:23 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-04 09:21:23.865544 | orchestrator | 2025-02-04 09:21:23 | INFO  | Please wait and do not abort execution. 
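The osism.services.hddtemp role above swaps the retired hddtemp daemon for the in-kernel drivetemp hwmon driver plus lm-sensors (the immediate module load only changed on testbed-manager and was skipped on the nodes). Per Debian-family node this is roughly the following, where modules-load.d is the standard systemd mechanism and the service name is assumed from the Debian package:

    sudo apt-get remove -y hddtemp                                # drop the legacy userspace daemon
    echo drivetemp | sudo tee /etc/modules-load.d/drivetemp.conf  # enable the kernel module at boot
    sudo modprobe drivetemp                                       # load it immediately where needed
    sudo apt-get install -y lm-sensors
    sudo systemctl enable --now lm-sensors                        # the "Manage lm-sensors service" step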
2025-02-04 09:21:23.865593 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-04 09:21:23.866257 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-04 09:21:23.866920 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-04 09:21:23.867841 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-04 09:21:23.868364 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-04 09:21:23.868947 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-04 09:21:23.869790 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-04 09:21:23.869904 | orchestrator | 2025-02-04 09:21:23.870804 | orchestrator | 2025-02-04 09:21:23.871327 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:21:23.871866 | orchestrator | Tuesday 04 February 2025 09:21:23 +0000 (0:00:02.032) 0:00:25.097 ****** 2025-02-04 09:21:23.872563 | orchestrator | =============================================================================== 2025-02-04 09:21:23.872957 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.97s 2025-02-04 09:21:23.873568 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.16s 2025-02-04 09:21:23.874009 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.03s 2025-02-04 09:21:23.874487 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.41s 2025-02-04 09:21:23.875169 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.31s 2025-02-04 09:21:23.875681 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.26s 2025-02-04 09:21:23.876123 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.26s 2025-02-04 09:21:23.876675 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.77s 2025-02-04 09:21:23.877170 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.72s 2025-02-04 09:21:24.494220 | orchestrator | + sudo systemctl restart docker-compose@manager 2025-02-04 09:21:26.193295 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-02-04 09:21:26.226070 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-02-04 09:21:26.226255 | orchestrator | + local max_attempts=60 2025-02-04 09:21:26.226279 | orchestrator | + local name=ceph-ansible 2025-02-04 09:21:26.226295 | orchestrator | + local attempt_num=1 2025-02-04 09:21:26.226311 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-02-04 09:21:26.226346 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-02-04 09:21:26.226432 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-02-04 09:21:26.226452 | orchestrator | + local max_attempts=60 2025-02-04 09:21:26.226467 | orchestrator | + local name=kolla-ansible 2025-02-04 09:21:26.226482 | orchestrator | + local attempt_num=1 2025-02-04 09:21:26.226501 | orchestrator | ++ /usr/bin/docker inspect -f 
'{{.State.Health.Status}}' kolla-ansible 2025-02-04 09:21:26.256006 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-02-04 09:21:26.256528 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-02-04 09:21:26.256681 | orchestrator | + local max_attempts=60 2025-02-04 09:21:26.256703 | orchestrator | + local name=osism-ansible 2025-02-04 09:21:26.256719 | orchestrator | + local attempt_num=1 2025-02-04 09:21:26.256749 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-02-04 09:21:26.289233 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-02-04 09:21:26.468743 | orchestrator | + [[ true == \t\r\u\e ]] 2025-02-04 09:21:26.468864 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-02-04 09:21:26.468902 | orchestrator | ARA in ceph-ansible already disabled. 2025-02-04 09:21:26.636723 | orchestrator | ARA in kolla-ansible already disabled. 2025-02-04 09:21:26.813071 | orchestrator | ARA in osism-ansible already disabled. 2025-02-04 09:21:26.990324 | orchestrator | ARA in osism-kubernetes already disabled. 2025-02-04 09:21:26.990736 | orchestrator | + osism apply gather-facts 2025-02-04 09:21:28.465663 | orchestrator | 2025-02-04 09:21:28 | INFO  | Task da56f4c5-6f4d-4922-9ece-aef93223b3fa (gather-facts) was prepared for execution. 2025-02-04 09:21:31.760961 | orchestrator | 2025-02-04 09:21:28 | INFO  | It takes a moment until task da56f4c5-6f4d-4922-9ece-aef93223b3fa (gather-facts) has been started and output is visible here. 2025-02-04 09:21:31.761104 | orchestrator | 2025-02-04 09:21:31.763989 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-02-04 09:21:31.764035 | orchestrator | 2025-02-04 09:21:31.765432 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-02-04 09:21:36.984813 | orchestrator | Tuesday 04 February 2025 09:21:31 +0000 (0:00:00.191) 0:00:00.191 ****** 2025-02-04 09:21:36.984956 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:21:36.985660 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:21:36.985937 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:21:36.986516 | orchestrator | ok: [testbed-manager] 2025-02-04 09:21:36.987290 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:21:36.987864 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:21:36.989195 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:21:36.989938 | orchestrator | 2025-02-04 09:21:36.989992 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-02-04 09:21:36.990443 | orchestrator | 2025-02-04 09:21:36.990525 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-02-04 09:21:36.990566 | orchestrator | Tuesday 04 February 2025 09:21:36 +0000 (0:00:05.231) 0:00:05.422 ****** 2025-02-04 09:21:37.164239 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:21:37.263263 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:21:37.346892 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:21:37.436115 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:21:37.525207 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:21:37.567313 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:21:37.567484 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:21:37.568225 | orchestrator | 2025-02-04 09:21:37.568626 | orchestrator | PLAY RECAP 
********************************************************************* 2025-02-04 09:21:37.569000 | orchestrator | 2025-02-04 09:21:37 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-04 09:21:37.569355 | orchestrator | 2025-02-04 09:21:37 | INFO  | Please wait and do not abort execution. 2025-02-04 09:21:37.570116 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-04 09:21:37.570659 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-04 09:21:37.570859 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-04 09:21:37.571252 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-04 09:21:37.571666 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-04 09:21:37.571917 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-04 09:21:37.572175 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-04 09:21:37.572493 | orchestrator | 2025-02-04 09:21:37.572774 | orchestrator | 2025-02-04 09:21:37.573145 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:21:37.573629 | orchestrator | Tuesday 04 February 2025 09:21:37 +0000 (0:00:00.582) 0:00:06.005 ****** 2025-02-04 09:21:37.574177 | orchestrator | =============================================================================== 2025-02-04 09:21:37.574357 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.23s 2025-02-04 09:21:37.575033 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.58s 2025-02-04 09:21:38.223484 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-02-04 09:21:38.243639 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-02-04 09:21:38.257884 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-02-04 09:21:38.272376 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-02-04 09:21:38.290123 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-02-04 09:21:38.306310 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-02-04 09:21:38.325963 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-02-04 09:21:38.343193 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-02-04 09:21:38.356611 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-02-04 09:21:38.370338 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-02-04 09:21:38.392315 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-02-04 09:21:38.415782 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-02-04 09:21:38.438102 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-02-04 09:21:38.461229 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-02-04 09:21:38.483278 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-02-04 09:21:38.503082 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-02-04 09:21:38.528006 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-02-04 09:21:38.550522 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-02-04 09:21:38.569302 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-02-04 09:21:38.589052 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-02-04 09:21:38.613418 | orchestrator | + [[ false == \t\r\u\e ]] 2025-02-04 09:21:38.991587 | orchestrator | changed 2025-02-04 09:21:39.062785 | 2025-02-04 09:21:39.062978 | TASK [Deploy services] 2025-02-04 09:21:39.203628 | orchestrator | skipping: Conditional result was False 2025-02-04 09:21:39.225198 | 2025-02-04 09:21:39.225361 | TASK [Deploy in a nutshell] 2025-02-04 09:21:39.923371 | orchestrator | + set -e 2025-02-04 09:21:39.923802 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-02-04 09:21:39.923888 | orchestrator | ++ export INTERACTIVE=false 2025-02-04 09:21:39.923909 | orchestrator | ++ INTERACTIVE=false 2025-02-04 09:21:39.923955 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-02-04 09:21:39.923975 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-02-04 09:21:39.923991 | orchestrator | + source /opt/manager-vars.sh 2025-02-04 09:21:39.924019 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-02-04 09:21:39.924042 | orchestrator | ++ NUMBER_OF_NODES=6 2025-02-04 09:21:39.924058 | orchestrator | ++ export CEPH_VERSION=quincy 2025-02-04 09:21:39.924072 | orchestrator | ++ CEPH_VERSION=quincy 2025-02-04 09:21:39.924087 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-02-04 09:21:39.924101 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-02-04 09:21:39.924115 | orchestrator | ++ export MANAGER_VERSION=latest 2025-02-04 09:21:39.924130 | orchestrator | ++ MANAGER_VERSION=latest 2025-02-04 09:21:39.924144 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-02-04 09:21:39.924159 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-02-04 09:21:39.924173 | orchestrator | ++ export ARA=false 2025-02-04 09:21:39.924187 | orchestrator | ++ ARA=false 2025-02-04 09:21:39.924202 | orchestrator | ++ export TEMPEST=false 2025-02-04 09:21:39.924216 | orchestrator | ++ TEMPEST=false 2025-02-04 09:21:39.924230 | orchestrator | ++ export IS_ZUUL=true 2025-02-04 09:21:39.924244 | orchestrator | ++ IS_ZUUL=true 2025-02-04 09:21:39.924259 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.89 2025-02-04 09:21:39.924274 | 
orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.89 2025-02-04 09:21:39.924288 | orchestrator | ++ export EXTERNAL_API=false 2025-02-04 09:21:39.924302 | orchestrator | ++ EXTERNAL_API=false 2025-02-04 09:21:39.924316 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-02-04 09:21:39.924330 | orchestrator | ++ IMAGE_USER=ubuntu 2025-02-04 09:21:39.924344 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-02-04 09:21:39.924359 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-02-04 09:21:39.924373 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-02-04 09:21:39.924406 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-02-04 09:21:39.924422 | orchestrator | + echo 2025-02-04 09:21:39.924437 | orchestrator | 2025-02-04 09:21:39.924452 | orchestrator | # PULL IMAGES 2025-02-04 09:21:39.924466 | orchestrator | 2025-02-04 09:21:39.924489 | orchestrator | + echo '# PULL IMAGES' 2025-02-04 09:21:39.925031 | orchestrator | + echo 2025-02-04 09:21:39.925061 | orchestrator | ++ semver latest 7.0.0 2025-02-04 09:21:39.998762 | orchestrator | + [[ -1 -ge 0 ]] 2025-02-04 09:21:41.541462 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-02-04 09:21:41.541706 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-02-04 09:21:41.541811 | orchestrator | 2025-02-04 09:21:41 | INFO  | Trying to run play pull-images in environment custom 2025-02-04 09:21:41.589137 | orchestrator | 2025-02-04 09:21:41 | INFO  | Task 92826edb-8c6f-41f1-8c1e-49ac2e945a14 (pull-images) was prepared for execution. 2025-02-04 09:21:45.324036 | orchestrator | 2025-02-04 09:21:41 | INFO  | It takes a moment until task 92826edb-8c6f-41f1-8c1e-49ac2e945a14 (pull-images) has been started and output is visible here. 2025-02-04 09:21:45.324291 | orchestrator | 2025-02-04 09:21:45.325645 | orchestrator | PLAY [Pull images] ************************************************************* 2025-02-04 09:21:45.325780 | orchestrator | 2025-02-04 09:21:45.327650 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-02-04 09:21:45.328238 | orchestrator | Tuesday 04 February 2025 09:21:45 +0000 (0:00:00.181) 0:00:00.181 ****** 2025-02-04 09:22:26.048038 | orchestrator | changed: [testbed-manager] 2025-02-04 09:22:26.048711 | orchestrator | 2025-02-04 09:22:26.048755 | orchestrator | TASK [Pull other images] ******************************************************* 2025-02-04 09:22:26.048782 | orchestrator | Tuesday 04 February 2025 09:22:26 +0000 (0:00:40.725) 0:00:40.907 ****** 2025-02-04 09:23:21.822332 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-02-04 09:23:21.822576 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-02-04 09:23:21.822605 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-02-04 09:23:21.822653 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-02-04 09:23:21.822682 | orchestrator | changed: [testbed-manager] => (item=common) 2025-02-04 09:23:21.822697 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-02-04 09:23:21.822716 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-02-04 09:23:21.822762 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-02-04 09:23:21.822778 | orchestrator | changed: [testbed-manager] => (item=heat) 2025-02-04 09:23:21.822797 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-02-04 09:23:21.822812 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-02-04 09:23:21.822847 | orchestrator | 
changed: [testbed-manager] => (item=loadbalancer) 2025-02-04 09:23:21.822870 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-02-04 09:23:21.826056 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-02-04 09:23:21.826106 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-02-04 09:23:21.826577 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-02-04 09:23:21.826598 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-02-04 09:23:21.826612 | orchestrator | changed: [testbed-manager] => (item=octavia) 2025-02-04 09:23:21.826647 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-02-04 09:23:21.826663 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-02-04 09:23:21.826677 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-02-04 09:23:21.826693 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-02-04 09:23:21.826708 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-02-04 09:23:21.826722 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-02-04 09:23:21.826735 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-02-04 09:23:21.826750 | orchestrator | 2025-02-04 09:23:21.826765 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:23:21.826781 | orchestrator | 2025-02-04 09:23:21 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-04 09:23:21.826797 | orchestrator | 2025-02-04 09:23:21 | INFO  | Please wait and do not abort execution. 2025-02-04 09:23:21.826817 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-04 09:23:21.827049 | orchestrator | 2025-02-04 09:23:21.827433 | orchestrator | 2025-02-04 09:23:21.827909 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:23:21.828118 | orchestrator | Tuesday 04 February 2025 09:23:21 +0000 (0:00:55.773) 0:01:36.680 ****** 2025-02-04 09:23:21.828614 | orchestrator | =============================================================================== 2025-02-04 09:23:21.828874 | orchestrator | Pull other images ------------------------------------------------------ 55.77s 2025-02-04 09:23:21.829791 | orchestrator | Pull keystone image ---------------------------------------------------- 40.73s 2025-02-04 09:23:24.068531 | orchestrator | 2025-02-04 09:23:24 | INFO  | Trying to run play wipe-partitions in environment custom 2025-02-04 09:23:24.117361 | orchestrator | 2025-02-04 09:23:24 | INFO  | Task bc7a78ba-5031-4b3d-9139-7fad396e460c (wipe-partitions) was prepared for execution. 2025-02-04 09:23:27.597320 | orchestrator | 2025-02-04 09:23:24 | INFO  | It takes a moment until task bc7a78ba-5031-4b3d-9139-7fad396e460c (wipe-partitions) has been started and output is visible here. 
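Pre-pulling all Kolla service images on the manager (about 1:37 total here) keeps the later deploy steps from stalling on registry downloads. The play is essentially a docker pull loop over the service list above; a rough sketch, where the registry/namespace and tag scheme are assumptions and the item list is shortened:

    for svc in keystone aodh barbican cinder glance horizon neutron nova rabbitmq; do
        docker pull "registry.example.org/kolla/${svc}:2024.1"   # hypothetical image path; 2024.1 = OPENSTACK_VERSION
    done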
2025-02-04 09:23:27.597438 | orchestrator | 2025-02-04 09:23:27.597776 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-02-04 09:23:27.597808 | orchestrator | 2025-02-04 09:23:27.597960 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-02-04 09:23:27.597981 | orchestrator | Tuesday 04 February 2025 09:23:27 +0000 (0:00:00.133) 0:00:00.133 ****** 2025-02-04 09:23:28.291681 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:23:28.291869 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:23:28.295301 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:23:28.296901 | orchestrator | 2025-02-04 09:23:28.296949 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-02-04 09:23:28.444926 | orchestrator | Tuesday 04 February 2025 09:23:28 +0000 (0:00:00.696) 0:00:00.829 ****** 2025-02-04 09:23:28.445081 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:23:28.564311 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:23:28.564524 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:23:28.564550 | orchestrator | 2025-02-04 09:23:28.564572 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-02-04 09:23:28.564889 | orchestrator | Tuesday 04 February 2025 09:23:28 +0000 (0:00:00.274) 0:00:01.103 ****** 2025-02-04 09:23:29.332200 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:23:29.333167 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:23:29.334938 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:23:29.534247 | orchestrator | 2025-02-04 09:23:29.534365 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-02-04 09:23:29.534385 | orchestrator | Tuesday 04 February 2025 09:23:29 +0000 (0:00:00.765) 0:00:01.868 ****** 2025-02-04 09:23:29.534417 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:23:29.624573 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:23:29.627173 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:23:29.627762 | orchestrator | 2025-02-04 09:23:29.627792 | orchestrator | TASK [Check device availability] *********************************************** 2025-02-04 09:23:29.627838 | orchestrator | Tuesday 04 February 2025 09:23:29 +0000 (0:00:00.295) 0:00:02.163 ****** 2025-02-04 09:23:30.912372 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-02-04 09:23:30.912843 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-02-04 09:23:30.913593 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-02-04 09:23:30.914469 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-02-04 09:23:30.915094 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-02-04 09:23:30.916199 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-02-04 09:23:30.919141 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-02-04 09:23:30.919245 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-02-04 09:23:30.919620 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-02-04 09:23:30.920442 | orchestrator | 2025-02-04 09:23:30.921418 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-02-04 09:23:30.921996 | orchestrator | Tuesday 04 February 2025 09:23:30 +0000 (0:00:01.286) 0:00:03.450 ****** 2025-02-04 09:23:32.375908 | 
orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-02-04 09:23:32.376497 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-02-04 09:23:32.377048 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-02-04 09:23:32.377790 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-02-04 09:23:32.381147 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-02-04 09:23:32.381914 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-02-04 09:23:32.381946 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-02-04 09:23:32.382806 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-02-04 09:23:32.383565 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-02-04 09:23:32.384791 | orchestrator | 2025-02-04 09:23:32.385153 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-02-04 09:23:32.385894 | orchestrator | Tuesday 04 February 2025 09:23:32 +0000 (0:00:01.463) 0:00:04.914 ****** 2025-02-04 09:23:34.782194 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-02-04 09:23:34.782391 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-02-04 09:23:34.783761 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-02-04 09:23:34.784163 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-02-04 09:23:34.784448 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-02-04 09:23:34.785235 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-02-04 09:23:34.785689 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-02-04 09:23:34.786128 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-02-04 09:23:34.786448 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-02-04 09:23:34.787841 | orchestrator | 2025-02-04 09:23:35.430253 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-02-04 09:23:35.430394 | orchestrator | Tuesday 04 February 2025 09:23:34 +0000 (0:00:02.403) 0:00:07.318 ****** 2025-02-04 09:23:35.430431 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:23:35.431328 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:23:35.431367 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:23:35.431835 | orchestrator | 2025-02-04 09:23:35.432456 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-02-04 09:23:35.433237 | orchestrator | Tuesday 04 February 2025 09:23:35 +0000 (0:00:00.650) 0:00:07.968 ****** 2025-02-04 09:23:36.181164 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:23:36.181870 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:23:36.184343 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:23:36.184695 | orchestrator | 2025-02-04 09:23:36.186323 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:23:36.186572 | orchestrator | 2025-02-04 09:23:36 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-04 09:23:36.186607 | orchestrator | 2025-02-04 09:23:36 | INFO  | Please wait and do not abort execution. 
2025-02-04 09:23:36.186663 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-04 09:23:36.187662 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-04 09:23:36.188805 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-04 09:23:36.189141 | orchestrator | 2025-02-04 09:23:36.189447 | orchestrator | 2025-02-04 09:23:36.189904 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:23:36.190375 | orchestrator | Tuesday 04 February 2025 09:23:36 +0000 (0:00:00.751) 0:00:08.719 ****** 2025-02-04 09:23:36.190723 | orchestrator | =============================================================================== 2025-02-04 09:23:36.191182 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.40s 2025-02-04 09:23:36.191704 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.46s 2025-02-04 09:23:36.191900 | orchestrator | Check device availability ----------------------------------------------- 1.29s 2025-02-04 09:23:36.192275 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.77s 2025-02-04 09:23:36.192845 | orchestrator | Request device events from the kernel ----------------------------------- 0.75s 2025-02-04 09:23:36.193175 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.70s 2025-02-04 09:23:36.193534 | orchestrator | Reload udev rules ------------------------------------------------------- 0.65s 2025-02-04 09:23:36.194299 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.30s 2025-02-04 09:23:38.639200 | orchestrator | Remove all rook related logical devices --------------------------------- 0.27s 2025-02-04 09:23:38.639338 | orchestrator | 2025-02-04 09:23:38 | INFO  | Task 47013afb-761a-4c77-8a70-e76709d3f53f (facts) was prepared for execution. 2025-02-04 09:23:42.145088 | orchestrator | 2025-02-04 09:23:38 | INFO  | It takes a moment until task 47013afb-761a-4c77-8a70-e76709d3f53f (facts) has been started and output is visible here. 
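The wipe-partitions play above scrubs the three Ceph OSD candidate disks on each storage node before deployment (UID 167 in the first task is the ceph user inside the containers). A per-device manual equivalent, using the same tools the task names reference:

    for dev in /dev/sdb /dev/sdc /dev/sdd; do
        sudo wipefs --all "$dev"                                     # wipe filesystem/RAID/partition signatures
        sudo dd if=/dev/zero of="$dev" bs=1M count=32 oflag=direct   # overwrite first 32M with zeros
    done
    sudo udevadm control --reload-rules   # reload udev rules
    sudo udevadm trigger                  # request device events from the kernel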
2025-02-04 09:23:42.145225 | orchestrator | 2025-02-04 09:23:42.145299 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-02-04 09:23:42.149027 | orchestrator | 2025-02-04 09:23:42.149466 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-02-04 09:23:42.153366 | orchestrator | Tuesday 04 February 2025 09:23:42 +0000 (0:00:00.224) 0:00:00.224 ****** 2025-02-04 09:23:43.318819 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:23:43.318926 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:23:43.319553 | orchestrator | ok: [testbed-manager] 2025-02-04 09:23:43.322670 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:23:43.323085 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:23:43.323749 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:23:43.324784 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:23:43.324827 | orchestrator | 2025-02-04 09:23:43.328891 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-02-04 09:23:43.514201 | orchestrator | Tuesday 04 February 2025 09:23:43 +0000 (0:00:01.170) 0:00:01.394 ****** 2025-02-04 09:23:43.514342 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:23:43.600406 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:23:43.687543 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:23:43.769694 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:23:43.871051 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:23:44.722122 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:23:44.722550 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:23:44.722610 | orchestrator | 2025-02-04 09:23:44.722668 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-02-04 09:23:44.722710 | orchestrator | 2025-02-04 09:23:44.723594 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-02-04 09:23:44.727438 | orchestrator | Tuesday 04 February 2025 09:23:44 +0000 (0:00:01.406) 0:00:02.800 ****** 2025-02-04 09:23:50.287455 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:23:50.288241 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:23:50.288296 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:23:50.289453 | orchestrator | ok: [testbed-manager] 2025-02-04 09:23:50.293177 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:23:50.297127 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:23:50.297370 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:23:50.297607 | orchestrator | 2025-02-04 09:23:50.297999 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-02-04 09:23:50.298302 | orchestrator | 2025-02-04 09:23:50.301139 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-02-04 09:23:50.307373 | orchestrator | Tuesday 04 February 2025 09:23:50 +0000 (0:00:05.565) 0:00:08.365 ****** 2025-02-04 09:23:50.509142 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:23:50.591202 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:23:50.671029 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:23:50.753060 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:23:50.832278 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:23:50.873303 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:23:50.873494 | orchestrator | skipping: 
[testbed-node-5] 2025-02-04 09:23:50.873518 | orchestrator | 2025-02-04 09:23:50.873535 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:23:50.873557 | orchestrator | 2025-02-04 09:23:50 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-04 09:23:50.876410 | orchestrator | 2025-02-04 09:23:50 | INFO  | Please wait and do not abort execution. 2025-02-04 09:23:50.876596 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-04 09:23:50.876713 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-04 09:23:50.876851 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-04 09:23:50.877396 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-04 09:23:50.877581 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-04 09:23:50.877674 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-04 09:23:50.877920 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-04 09:23:50.878112 | orchestrator | 2025-02-04 09:23:50.878401 | orchestrator | 2025-02-04 09:23:50.878721 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:23:50.879006 | orchestrator | Tuesday 04 February 2025 09:23:50 +0000 (0:00:00.588) 0:00:08.954 ****** 2025-02-04 09:23:50.879222 | orchestrator | =============================================================================== 2025-02-04 09:23:50.879558 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.57s 2025-02-04 09:23:50.880709 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.41s 2025-02-04 09:23:50.884244 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.17s 2025-02-04 09:23:53.293110 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.59s 2025-02-04 09:23:53.293255 | orchestrator | 2025-02-04 09:23:53 | INFO  | Task 2c20a3fb-0291-4a86-85d1-0b40fee6e48f (ceph-configure-lvm-volumes) was prepared for execution. 2025-02-04 09:23:58.522116 | orchestrator | 2025-02-04 09:23:53 | INFO  | It takes a moment until task 2c20a3fb-0291-4a86-85d1-0b40fee6e48f (ceph-configure-lvm-volumes) has been started and output is visible here. 
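The facts play prepares Ansible's standard local-facts drop-in directory on every host so that later plays can read role-provided values through ansible_local. On a node this amounts to little more than the following (path from Ansible's local-facts convention):

    sudo mkdir -p /etc/ansible/facts.d   # custom facts directory
    # any *.fact file placed here (INI/JSON, or an executable printing JSON)
    # appears under ansible_local.<name> on the next fact-gathering run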
2025-02-04 09:23:58.522345 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-02-04 09:23:59.276575 | orchestrator | 2025-02-04 09:23:59.279359 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-02-04 09:23:59.279431 | orchestrator | 2025-02-04 09:23:59.279943 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-02-04 09:23:59.280045 | orchestrator | Tuesday 04 February 2025 09:23:59 +0000 (0:00:00.618) 0:00:00.618 ****** 2025-02-04 09:23:59.591879 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-02-04 09:23:59.592027 | orchestrator | 2025-02-04 09:23:59.592052 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-02-04 09:23:59.594215 | orchestrator | Tuesday 04 February 2025 09:23:59 +0000 (0:00:00.314) 0:00:00.932 ****** 2025-02-04 09:23:59.922244 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:23:59.923708 | orchestrator | 2025-02-04 09:23:59.926173 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:23:59.929971 | orchestrator | Tuesday 04 February 2025 09:23:59 +0000 (0:00:00.332) 0:00:01.265 ****** 2025-02-04 09:24:00.653610 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-02-04 09:24:00.655516 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-02-04 09:24:00.655804 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-02-04 09:24:00.660494 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-02-04 09:24:00.661736 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-02-04 09:24:00.662316 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-02-04 09:24:00.663872 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-02-04 09:24:00.664414 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-02-04 09:24:00.665196 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-02-04 09:24:00.665504 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-02-04 09:24:00.668261 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-02-04 09:24:00.669576 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-02-04 09:24:00.670410 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-02-04 09:24:00.670449 | orchestrator | 2025-02-04 09:24:00.670474 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:00.671421 | orchestrator | Tuesday 04 February 2025 09:24:00 +0000 (0:00:00.732) 0:00:01.998 ****** 2025-02-04 09:24:01.020410 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:24:01.021212 | orchestrator | 2025-02-04 09:24:01.023584 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:01.361525 | orchestrator | Tuesday 04 February 2025 09:24:01 +0000 
(0:00:00.366) 0:00:02.364 ****** 2025-02-04 09:24:01.361738 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:24:01.362251 | orchestrator | 2025-02-04 09:24:01.362304 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:01.362341 | orchestrator | Tuesday 04 February 2025 09:24:01 +0000 (0:00:00.340) 0:00:02.705 ****** 2025-02-04 09:24:01.667174 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:24:01.667457 | orchestrator | 2025-02-04 09:24:01.668102 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:01.668868 | orchestrator | Tuesday 04 February 2025 09:24:01 +0000 (0:00:00.297) 0:00:03.002 ****** 2025-02-04 09:24:01.992013 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:24:01.994742 | orchestrator | 2025-02-04 09:24:01.996243 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:01.998583 | orchestrator | Tuesday 04 February 2025 09:24:01 +0000 (0:00:00.329) 0:00:03.332 ****** 2025-02-04 09:24:02.295504 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:24:02.297548 | orchestrator | 2025-02-04 09:24:02.297612 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:02.298249 | orchestrator | Tuesday 04 February 2025 09:24:02 +0000 (0:00:00.307) 0:00:03.640 ****** 2025-02-04 09:24:02.583247 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:24:02.583977 | orchestrator | 2025-02-04 09:24:02.585548 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:02.939522 | orchestrator | Tuesday 04 February 2025 09:24:02 +0000 (0:00:00.287) 0:00:03.927 ****** 2025-02-04 09:24:02.939719 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:24:02.939905 | orchestrator | 2025-02-04 09:24:02.940442 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:02.940475 | orchestrator | Tuesday 04 February 2025 09:24:02 +0000 (0:00:00.356) 0:00:04.283 ****** 2025-02-04 09:24:03.183610 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:24:03.184441 | orchestrator | 2025-02-04 09:24:03.184965 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:03.185662 | orchestrator | Tuesday 04 February 2025 09:24:03 +0000 (0:00:00.245) 0:00:04.529 ****** 2025-02-04 09:24:04.262108 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_34756b4f-e35d-475a-95c3-a17bc4378557) 2025-02-04 09:24:04.262485 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_34756b4f-e35d-475a-95c3-a17bc4378557) 2025-02-04 09:24:04.263251 | orchestrator | 2025-02-04 09:24:04.263714 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:04.263994 | orchestrator | Tuesday 04 February 2025 09:24:04 +0000 (0:00:01.076) 0:00:05.606 ****** 2025-02-04 09:24:04.875603 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3639d977-d811-449d-b930-d83a01ae7e68) 2025-02-04 09:24:04.876164 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3639d977-d811-449d-b930-d83a01ae7e68) 2025-02-04 09:24:04.876205 | orchestrator | 2025-02-04 09:24:04.876623 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 
09:24:04.877720 | orchestrator | Tuesday 04 February 2025 09:24:04 +0000 (0:00:00.611) 0:00:06.217 ****** 2025-02-04 09:24:05.380410 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_77d1cf45-53d9-435f-b362-8711a42fa03b) 2025-02-04 09:24:05.380975 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_77d1cf45-53d9-435f-b362-8711a42fa03b) 2025-02-04 09:24:05.382250 | orchestrator | 2025-02-04 09:24:05.383242 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:05.383783 | orchestrator | Tuesday 04 February 2025 09:24:05 +0000 (0:00:00.508) 0:00:06.726 ****** 2025-02-04 09:24:05.941515 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5ef04da4-33c0-4c31-8f35-70c17ff294fe) 2025-02-04 09:24:06.334113 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5ef04da4-33c0-4c31-8f35-70c17ff294fe) 2025-02-04 09:24:06.334237 | orchestrator | 2025-02-04 09:24:06.334258 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:06.334274 | orchestrator | Tuesday 04 February 2025 09:24:05 +0000 (0:00:00.553) 0:00:07.279 ****** 2025-02-04 09:24:06.334307 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-02-04 09:24:06.334927 | orchestrator | 2025-02-04 09:24:06.335508 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:06.335542 | orchestrator | Tuesday 04 February 2025 09:24:06 +0000 (0:00:00.399) 0:00:07.679 ****** 2025-02-04 09:24:06.836560 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-02-04 09:24:06.839340 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-02-04 09:24:06.840782 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-02-04 09:24:06.844453 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-02-04 09:24:06.844524 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-02-04 09:24:06.846941 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-02-04 09:24:06.847211 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-02-04 09:24:06.847230 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-02-04 09:24:06.848526 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-02-04 09:24:06.851813 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-02-04 09:24:06.851857 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-02-04 09:24:06.852575 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-02-04 09:24:06.853156 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-02-04 09:24:06.854246 | orchestrator | 2025-02-04 09:24:06.855239 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:06.855846 | orchestrator | Tuesday 04 February 2025 09:24:06 
+0000 (0:00:00.500) 0:00:08.179 ****** 2025-02-04 09:24:07.154253 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:24:07.155797 | orchestrator | 2025-02-04 09:24:07.155837 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:07.155861 | orchestrator | Tuesday 04 February 2025 09:24:07 +0000 (0:00:00.318) 0:00:08.498 ****** 2025-02-04 09:24:07.381705 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:24:07.383143 | orchestrator | 2025-02-04 09:24:07.385852 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:07.386315 | orchestrator | Tuesday 04 February 2025 09:24:07 +0000 (0:00:00.229) 0:00:08.727 ****** 2025-02-04 09:24:07.611079 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:24:07.612951 | orchestrator | 2025-02-04 09:24:08.045828 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:08.046225 | orchestrator | Tuesday 04 February 2025 09:24:07 +0000 (0:00:00.228) 0:00:08.955 ****** 2025-02-04 09:24:08.046357 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:24:08.046538 | orchestrator | 2025-02-04 09:24:08.048228 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:08.048318 | orchestrator | Tuesday 04 February 2025 09:24:08 +0000 (0:00:00.434) 0:00:09.390 ****** 2025-02-04 09:24:08.270219 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:24:08.270752 | orchestrator | 2025-02-04 09:24:08.271178 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:08.271544 | orchestrator | Tuesday 04 February 2025 09:24:08 +0000 (0:00:00.223) 0:00:09.614 ****** 2025-02-04 09:24:08.550542 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:24:08.551900 | orchestrator | 2025-02-04 09:24:08.552898 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:08.554206 | orchestrator | Tuesday 04 February 2025 09:24:08 +0000 (0:00:00.281) 0:00:09.895 ****** 2025-02-04 09:24:08.833618 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:24:08.834873 | orchestrator | 2025-02-04 09:24:08.837750 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:08.837802 | orchestrator | Tuesday 04 February 2025 09:24:08 +0000 (0:00:00.283) 0:00:10.178 ****** 2025-02-04 09:24:09.091569 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:24:09.929867 | orchestrator | 2025-02-04 09:24:09.930149 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:09.930191 | orchestrator | Tuesday 04 February 2025 09:24:09 +0000 (0:00:00.258) 0:00:10.437 ****** 2025-02-04 09:24:09.930239 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-02-04 09:24:09.930397 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-02-04 09:24:09.930441 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-02-04 09:24:09.932404 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-02-04 09:24:09.932449 | orchestrator | 2025-02-04 09:24:09.932472 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:10.210302 | orchestrator | Tuesday 04 February 2025 09:24:09 +0000 (0:00:00.837) 0:00:11.274 ****** 2025-02-04 09:24:10.210443 | orchestrator | 
skipping: [testbed-node-3] 2025-02-04 09:24:10.212087 | orchestrator | 2025-02-04 09:24:10.479049 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:10.479173 | orchestrator | Tuesday 04 February 2025 09:24:10 +0000 (0:00:00.282) 0:00:11.557 ****** 2025-02-04 09:24:10.479210 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:24:10.479368 | orchestrator | 2025-02-04 09:24:10.479707 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:10.480907 | orchestrator | Tuesday 04 February 2025 09:24:10 +0000 (0:00:00.264) 0:00:11.821 ****** 2025-02-04 09:24:10.694212 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:24:10.696517 | orchestrator | 2025-02-04 09:24:10.969107 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:10.969218 | orchestrator | Tuesday 04 February 2025 09:24:10 +0000 (0:00:00.219) 0:00:12.041 ****** 2025-02-04 09:24:10.969251 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:24:10.970350 | orchestrator | 2025-02-04 09:24:10.970385 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-02-04 09:24:10.971370 | orchestrator | Tuesday 04 February 2025 09:24:10 +0000 (0:00:00.267) 0:00:12.308 ****** 2025-02-04 09:24:11.212536 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-02-04 09:24:11.212950 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-02-04 09:24:11.214240 | orchestrator | 2025-02-04 09:24:11.219208 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-02-04 09:24:11.219599 | orchestrator | Tuesday 04 February 2025 09:24:11 +0000 (0:00:00.248) 0:00:12.557 ****** 2025-02-04 09:24:11.718507 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:24:11.720962 | orchestrator | 2025-02-04 09:24:11.723540 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-02-04 09:24:11.727056 | orchestrator | Tuesday 04 February 2025 09:24:11 +0000 (0:00:00.500) 0:00:13.058 ****** 2025-02-04 09:24:11.965874 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:24:11.967865 | orchestrator | 2025-02-04 09:24:11.968706 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-02-04 09:24:11.969047 | orchestrator | Tuesday 04 February 2025 09:24:11 +0000 (0:00:00.224) 0:00:13.283 ****** 2025-02-04 09:24:12.234581 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:24:12.235396 | orchestrator | 2025-02-04 09:24:12.237066 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-02-04 09:24:12.237779 | orchestrator | Tuesday 04 February 2025 09:24:12 +0000 (0:00:00.295) 0:00:13.578 ****** 2025-02-04 09:24:12.549988 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:24:12.550252 | orchestrator | 2025-02-04 09:24:12.553204 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-02-04 09:24:12.553706 | orchestrator | Tuesday 04 February 2025 09:24:12 +0000 (0:00:00.316) 0:00:13.895 ****** 2025-02-04 09:24:12.841118 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8b56b489-397c-55c4-ba6f-4e97fbbc410a'}}) 2025-02-04 09:24:12.842957 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 
'value': {'osd_lvm_uuid': 'fd89a215-a86e-5b79-8dd1-0773a21fefe5'}}) 2025-02-04 09:24:12.843110 | orchestrator | 2025-02-04 09:24:12.843135 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-02-04 09:24:12.843156 | orchestrator | Tuesday 04 February 2025 09:24:12 +0000 (0:00:00.285) 0:00:14.181 ****** 2025-02-04 09:24:13.120314 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8b56b489-397c-55c4-ba6f-4e97fbbc410a'}})  2025-02-04 09:24:13.121805 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fd89a215-a86e-5b79-8dd1-0773a21fefe5'}})  2025-02-04 09:24:13.121845 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:24:13.122007 | orchestrator | 2025-02-04 09:24:13.122292 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-02-04 09:24:13.122870 | orchestrator | Tuesday 04 February 2025 09:24:13 +0000 (0:00:00.282) 0:00:14.463 ****** 2025-02-04 09:24:13.354703 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8b56b489-397c-55c4-ba6f-4e97fbbc410a'}})  2025-02-04 09:24:13.354836 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fd89a215-a86e-5b79-8dd1-0773a21fefe5'}})  2025-02-04 09:24:13.355351 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:24:13.358187 | orchestrator | 2025-02-04 09:24:13.358746 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-02-04 09:24:13.358775 | orchestrator | Tuesday 04 February 2025 09:24:13 +0000 (0:00:00.234) 0:00:14.698 ****** 2025-02-04 09:24:13.529963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8b56b489-397c-55c4-ba6f-4e97fbbc410a'}})  2025-02-04 09:24:13.530138 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fd89a215-a86e-5b79-8dd1-0773a21fefe5'}})  2025-02-04 09:24:13.530587 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:24:13.530606 | orchestrator | 2025-02-04 09:24:13.532296 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-02-04 09:24:13.532696 | orchestrator | Tuesday 04 February 2025 09:24:13 +0000 (0:00:00.176) 0:00:14.875 ****** 2025-02-04 09:24:13.694415 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:24:13.696055 | orchestrator | 2025-02-04 09:24:13.696120 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-02-04 09:24:13.696159 | orchestrator | Tuesday 04 February 2025 09:24:13 +0000 (0:00:00.164) 0:00:15.039 ****** 2025-02-04 09:24:13.882293 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:24:13.884399 | orchestrator | 2025-02-04 09:24:13.885190 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-02-04 09:24:13.885835 | orchestrator | Tuesday 04 February 2025 09:24:13 +0000 (0:00:00.182) 0:00:15.222 ****** 2025-02-04 09:24:14.030980 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:24:14.033425 | orchestrator | 2025-02-04 09:24:14.034832 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-02-04 09:24:14.035442 | orchestrator | Tuesday 04 February 2025 09:24:14 +0000 (0:00:00.151) 0:00:15.374 ****** 2025-02-04 09:24:14.195314 | orchestrator | skipping: [testbed-node-3] 2025-02-04 
09:24:14.195947 | orchestrator | 2025-02-04 09:24:14.594548 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-02-04 09:24:14.594729 | orchestrator | Tuesday 04 February 2025 09:24:14 +0000 (0:00:00.165) 0:00:15.539 ****** 2025-02-04 09:24:14.594767 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:24:14.595741 | orchestrator | 2025-02-04 09:24:14.595849 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-02-04 09:24:14.599914 | orchestrator | Tuesday 04 February 2025 09:24:14 +0000 (0:00:00.397) 0:00:15.937 ****** 2025-02-04 09:24:14.756226 | orchestrator | ok: [testbed-node-3] => { 2025-02-04 09:24:14.757478 | orchestrator |  "ceph_osd_devices": { 2025-02-04 09:24:14.757803 | orchestrator |  "sdb": { 2025-02-04 09:24:14.759031 | orchestrator |  "osd_lvm_uuid": "8b56b489-397c-55c4-ba6f-4e97fbbc410a" 2025-02-04 09:24:14.760187 | orchestrator |  }, 2025-02-04 09:24:14.762931 | orchestrator |  "sdc": { 2025-02-04 09:24:14.763065 | orchestrator |  "osd_lvm_uuid": "fd89a215-a86e-5b79-8dd1-0773a21fefe5" 2025-02-04 09:24:14.763094 | orchestrator |  } 2025-02-04 09:24:14.763110 | orchestrator |  } 2025-02-04 09:24:14.763124 | orchestrator | } 2025-02-04 09:24:14.763143 | orchestrator | 2025-02-04 09:24:14.764008 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-02-04 09:24:14.765269 | orchestrator | Tuesday 04 February 2025 09:24:14 +0000 (0:00:00.164) 0:00:16.102 ****** 2025-02-04 09:24:14.898332 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:24:14.899979 | orchestrator | 2025-02-04 09:24:14.900037 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-02-04 09:24:14.900536 | orchestrator | Tuesday 04 February 2025 09:24:14 +0000 (0:00:00.141) 0:00:16.244 ****** 2025-02-04 09:24:15.041112 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:24:15.042382 | orchestrator | 2025-02-04 09:24:15.042858 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-02-04 09:24:15.043758 | orchestrator | Tuesday 04 February 2025 09:24:15 +0000 (0:00:00.140) 0:00:16.384 ****** 2025-02-04 09:24:15.182901 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:24:15.183202 | orchestrator | 2025-02-04 09:24:15.183764 | orchestrator | TASK [Print configuration data] ************************************************ 2025-02-04 09:24:15.185114 | orchestrator | Tuesday 04 February 2025 09:24:15 +0000 (0:00:00.144) 0:00:16.529 ****** 2025-02-04 09:24:15.486854 | orchestrator | changed: [testbed-node-3] => { 2025-02-04 09:24:15.488059 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-02-04 09:24:15.488081 | orchestrator |  "ceph_osd_devices": { 2025-02-04 09:24:15.488091 | orchestrator |  "sdb": { 2025-02-04 09:24:15.488100 | orchestrator |  "osd_lvm_uuid": "8b56b489-397c-55c4-ba6f-4e97fbbc410a" 2025-02-04 09:24:15.488114 | orchestrator |  }, 2025-02-04 09:24:15.488519 | orchestrator |  "sdc": { 2025-02-04 09:24:15.488537 | orchestrator |  "osd_lvm_uuid": "fd89a215-a86e-5b79-8dd1-0773a21fefe5" 2025-02-04 09:24:15.489283 | orchestrator |  } 2025-02-04 09:24:15.489741 | orchestrator |  }, 2025-02-04 09:24:15.491008 | orchestrator |  "lvm_volumes": [ 2025-02-04 09:24:15.491403 | orchestrator |  { 2025-02-04 09:24:15.491970 | orchestrator |  "data": "osd-block-8b56b489-397c-55c4-ba6f-4e97fbbc410a", 2025-02-04 09:24:15.492512 | orchestrator |  
"data_vg": "ceph-8b56b489-397c-55c4-ba6f-4e97fbbc410a" 2025-02-04 09:24:15.494123 | orchestrator |  }, 2025-02-04 09:24:15.495585 | orchestrator |  { 2025-02-04 09:24:15.495993 | orchestrator |  "data": "osd-block-fd89a215-a86e-5b79-8dd1-0773a21fefe5", 2025-02-04 09:24:15.497084 | orchestrator |  "data_vg": "ceph-fd89a215-a86e-5b79-8dd1-0773a21fefe5" 2025-02-04 09:24:15.498957 | orchestrator |  } 2025-02-04 09:24:15.499575 | orchestrator |  ] 2025-02-04 09:24:15.501073 | orchestrator |  } 2025-02-04 09:24:15.501448 | orchestrator | } 2025-02-04 09:24:15.501489 | orchestrator | 2025-02-04 09:24:15.501891 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-02-04 09:24:15.502260 | orchestrator | Tuesday 04 February 2025 09:24:15 +0000 (0:00:00.301) 0:00:16.831 ****** 2025-02-04 09:24:18.298240 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-02-04 09:24:18.300134 | orchestrator | 2025-02-04 09:24:18.302545 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-02-04 09:24:18.302614 | orchestrator | 2025-02-04 09:24:18.302694 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-02-04 09:24:18.304076 | orchestrator | Tuesday 04 February 2025 09:24:18 +0000 (0:00:02.806) 0:00:19.638 ****** 2025-02-04 09:24:18.708413 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-02-04 09:24:18.709940 | orchestrator | 2025-02-04 09:24:18.991345 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-02-04 09:24:18.991522 | orchestrator | Tuesday 04 February 2025 09:24:18 +0000 (0:00:00.412) 0:00:20.051 ****** 2025-02-04 09:24:18.991564 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:24:18.991687 | orchestrator | 2025-02-04 09:24:18.993002 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:18.995987 | orchestrator | Tuesday 04 February 2025 09:24:18 +0000 (0:00:00.283) 0:00:20.334 ****** 2025-02-04 09:24:19.515483 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-02-04 09:24:19.516916 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-02-04 09:24:19.518302 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-02-04 09:24:19.520199 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-02-04 09:24:19.520329 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-02-04 09:24:19.521049 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-02-04 09:24:19.522690 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-02-04 09:24:19.524323 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-02-04 09:24:19.524351 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-02-04 09:24:19.524364 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-02-04 09:24:19.524382 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-02-04 09:24:19.526355 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-02-04 09:24:19.526482 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-02-04 09:24:19.526506 | orchestrator | 2025-02-04 09:24:19.526694 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:19.528251 | orchestrator | Tuesday 04 February 2025 09:24:19 +0000 (0:00:00.526) 0:00:20.861 ****** 2025-02-04 09:24:19.764795 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:24:19.767128 | orchestrator | 2025-02-04 09:24:19.767485 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:19.767952 | orchestrator | Tuesday 04 February 2025 09:24:19 +0000 (0:00:00.250) 0:00:21.111 ****** 2025-02-04 09:24:20.029610 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:24:20.030104 | orchestrator | 2025-02-04 09:24:20.031067 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:20.031507 | orchestrator | Tuesday 04 February 2025 09:24:20 +0000 (0:00:00.263) 0:00:21.375 ****** 2025-02-04 09:24:20.826905 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:24:20.827166 | orchestrator | 2025-02-04 09:24:20.828977 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:20.829689 | orchestrator | Tuesday 04 February 2025 09:24:20 +0000 (0:00:00.795) 0:00:22.170 ****** 2025-02-04 09:24:21.055176 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:24:21.056043 | orchestrator | 2025-02-04 09:24:21.056821 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:21.057209 | orchestrator | Tuesday 04 February 2025 09:24:21 +0000 (0:00:00.230) 0:00:22.401 ****** 2025-02-04 09:24:21.283370 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:24:21.283756 | orchestrator | 2025-02-04 09:24:21.283990 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:21.285994 | orchestrator | Tuesday 04 February 2025 09:24:21 +0000 (0:00:00.229) 0:00:22.630 ****** 2025-02-04 09:24:21.549174 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:24:21.549782 | orchestrator | 2025-02-04 09:24:21.550447 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:21.551740 | orchestrator | Tuesday 04 February 2025 09:24:21 +0000 (0:00:00.262) 0:00:22.893 ****** 2025-02-04 09:24:21.802746 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:24:21.804554 | orchestrator | 2025-02-04 09:24:21.806656 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:21.807113 | orchestrator | Tuesday 04 February 2025 09:24:21 +0000 (0:00:00.251) 0:00:23.144 ****** 2025-02-04 09:24:22.067800 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:24:22.067974 | orchestrator | 2025-02-04 09:24:22.069991 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:22.071345 | orchestrator | Tuesday 04 February 2025 09:24:22 +0000 (0:00:00.265) 0:00:23.410 ****** 2025-02-04 09:24:22.665774 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_65dbfb35-c088-49c9-9717-b00e675ef863) 2025-02-04 09:24:22.669390 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_65dbfb35-c088-49c9-9717-b00e675ef863) 2025-02-04 09:24:22.669685 | orchestrator | 2025-02-04 09:24:22.669859 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:22.670242 | orchestrator | Tuesday 04 February 2025 09:24:22 +0000 (0:00:00.600) 0:00:24.010 ****** 2025-02-04 09:24:23.179599 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d5e896df-3760-43bc-823d-dd864c8452e8) 2025-02-04 09:24:23.181202 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d5e896df-3760-43bc-823d-dd864c8452e8) 2025-02-04 09:24:23.181999 | orchestrator | 2025-02-04 09:24:23.185355 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:23.658612 | orchestrator | Tuesday 04 February 2025 09:24:23 +0000 (0:00:00.514) 0:00:24.525 ****** 2025-02-04 09:24:23.658793 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_81f63dc5-7b43-4c99-9b7b-2b520b540dae) 2025-02-04 09:24:23.662725 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_81f63dc5-7b43-4c99-9b7b-2b520b540dae) 2025-02-04 09:24:23.662775 | orchestrator | 2025-02-04 09:24:23.662803 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:24.404996 | orchestrator | Tuesday 04 February 2025 09:24:23 +0000 (0:00:00.478) 0:00:25.004 ****** 2025-02-04 09:24:24.405113 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c8a0131d-fae0-46a9-a275-20bf3d241b40) 2025-02-04 09:24:24.405279 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c8a0131d-fae0-46a9-a275-20bf3d241b40) 2025-02-04 09:24:24.405302 | orchestrator | 2025-02-04 09:24:24.405323 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:24.405681 | orchestrator | Tuesday 04 February 2025 09:24:24 +0000 (0:00:00.746) 0:00:25.750 ****** 2025-02-04 09:24:25.283705 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-02-04 09:24:25.283851 | orchestrator | 2025-02-04 09:24:25.284547 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:25.285187 | orchestrator | Tuesday 04 February 2025 09:24:25 +0000 (0:00:00.879) 0:00:26.630 ****** 2025-02-04 09:24:25.878186 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-02-04 09:24:25.878301 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-02-04 09:24:25.878316 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-02-04 09:24:25.883251 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-02-04 09:24:25.884148 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-02-04 09:24:25.884173 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-02-04 09:24:25.884218 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-02-04 09:24:25.888371 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-02-04 09:24:25.888506 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-02-04 09:24:25.888532 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-02-04 09:24:25.889789 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-02-04 09:24:26.158825 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-02-04 09:24:26.158934 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-02-04 09:24:26.158950 | orchestrator | 2025-02-04 09:24:26.158965 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:26.158978 | orchestrator | Tuesday 04 February 2025 09:24:25 +0000 (0:00:00.587) 0:00:27.217 ****** 2025-02-04 09:24:26.159007 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:24:26.159991 | orchestrator | 2025-02-04 09:24:26.160090 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:26.163778 | orchestrator | Tuesday 04 February 2025 09:24:26 +0000 (0:00:00.272) 0:00:27.490 ****** 2025-02-04 09:24:26.448521 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:24:26.448710 | orchestrator | 2025-02-04 09:24:26.448735 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:26.448756 | orchestrator | Tuesday 04 February 2025 09:24:26 +0000 (0:00:00.301) 0:00:27.792 ****** 2025-02-04 09:24:26.685997 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:24:26.686275 | orchestrator | 2025-02-04 09:24:26.686312 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:26.686850 | orchestrator | Tuesday 04 February 2025 09:24:26 +0000 (0:00:00.240) 0:00:28.032 ****** 2025-02-04 09:24:26.976586 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:24:26.977005 | orchestrator | 2025-02-04 09:24:26.977129 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:26.977474 | orchestrator | Tuesday 04 February 2025 09:24:26 +0000 (0:00:00.290) 0:00:28.322 ****** 2025-02-04 09:24:27.241341 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:24:27.528960 | orchestrator | 2025-02-04 09:24:27.529134 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:27.529160 | orchestrator | Tuesday 04 February 2025 09:24:27 +0000 (0:00:00.263) 0:00:28.586 ****** 2025-02-04 09:24:27.529223 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:24:27.529317 | orchestrator | 2025-02-04 09:24:27.529770 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:27.786828 | orchestrator | Tuesday 04 February 2025 09:24:27 +0000 (0:00:00.283) 0:00:28.870 ****** 2025-02-04 09:24:27.786963 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:24:27.787689 | orchestrator | 2025-02-04 09:24:27.788063 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:27.788620 | orchestrator | Tuesday 04 February 2025 09:24:27 +0000 (0:00:00.261) 0:00:29.131 ****** 2025-02-04 09:24:28.096838 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:24:28.097784 | orchestrator | 2025-02-04 09:24:28.097953 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-02-04 09:24:28.097994 | orchestrator | Tuesday 04 February 2025 09:24:28 +0000 (0:00:00.311) 0:00:29.443 ****** 2025-02-04 09:24:29.173289 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-02-04 09:24:29.176163 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-02-04 09:24:29.177017 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-02-04 09:24:29.177053 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-02-04 09:24:29.177076 | orchestrator | 2025-02-04 09:24:29.177907 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:29.182270 | orchestrator | Tuesday 04 February 2025 09:24:29 +0000 (0:00:01.074) 0:00:30.517 ****** 2025-02-04 09:24:29.410475 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:24:29.410765 | orchestrator | 2025-02-04 09:24:29.411751 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:29.412046 | orchestrator | Tuesday 04 February 2025 09:24:29 +0000 (0:00:00.239) 0:00:30.757 ****** 2025-02-04 09:24:29.669591 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:24:29.669780 | orchestrator | 2025-02-04 09:24:29.670144 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:29.670792 | orchestrator | Tuesday 04 February 2025 09:24:29 +0000 (0:00:00.255) 0:00:31.013 ****** 2025-02-04 09:24:29.939099 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:24:30.204013 | orchestrator | 2025-02-04 09:24:30.204129 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:30.204149 | orchestrator | Tuesday 04 February 2025 09:24:29 +0000 (0:00:00.265) 0:00:31.279 ****** 2025-02-04 09:24:30.204179 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:24:30.204255 | orchestrator | 2025-02-04 09:24:30.204278 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-02-04 09:24:30.204922 | orchestrator | Tuesday 04 February 2025 09:24:30 +0000 (0:00:00.271) 0:00:31.550 ****** 2025-02-04 09:24:30.429704 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-02-04 09:24:30.429833 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-02-04 09:24:30.430711 | orchestrator | 2025-02-04 09:24:30.430857 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-02-04 09:24:30.430872 | orchestrator | Tuesday 04 February 2025 09:24:30 +0000 (0:00:00.224) 0:00:31.774 ****** 2025-02-04 09:24:30.580276 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:24:30.580447 | orchestrator | 2025-02-04 09:24:30.580469 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-02-04 09:24:30.580492 | orchestrator | Tuesday 04 February 2025 09:24:30 +0000 (0:00:00.150) 0:00:31.924 ****** 2025-02-04 09:24:30.724452 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:24:30.724736 | orchestrator | 2025-02-04 09:24:30.724773 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-02-04 09:24:30.725301 | orchestrator | Tuesday 04 February 2025 09:24:30 +0000 (0:00:00.145) 0:00:32.070 ****** 2025-02-04 09:24:30.901313 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:24:30.901937 | orchestrator | 2025-02-04 
09:24:31.064286 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-02-04 09:24:31.064373 | orchestrator | Tuesday 04 February 2025 09:24:30 +0000 (0:00:00.177) 0:00:32.247 ****** 2025-02-04 09:24:31.064392 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:24:31.064849 | orchestrator | 2025-02-04 09:24:31.065241 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-02-04 09:24:31.065893 | orchestrator | Tuesday 04 February 2025 09:24:31 +0000 (0:00:00.162) 0:00:32.410 ****** 2025-02-04 09:24:31.266421 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a9a0f878-ef24-53af-8bd4-10a12036221e'}}) 2025-02-04 09:24:31.266640 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '857e455f-002b-509a-b66d-9c4a1025daeb'}}) 2025-02-04 09:24:31.266700 | orchestrator | 2025-02-04 09:24:31.266815 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-02-04 09:24:31.267725 | orchestrator | Tuesday 04 February 2025 09:24:31 +0000 (0:00:00.201) 0:00:32.612 ****** 2025-02-04 09:24:31.647976 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a9a0f878-ef24-53af-8bd4-10a12036221e'}})  2025-02-04 09:24:31.649383 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '857e455f-002b-509a-b66d-9c4a1025daeb'}})  2025-02-04 09:24:31.651000 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:24:31.651035 | orchestrator | 2025-02-04 09:24:31.817557 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-02-04 09:24:31.817719 | orchestrator | Tuesday 04 February 2025 09:24:31 +0000 (0:00:00.379) 0:00:32.992 ****** 2025-02-04 09:24:31.817756 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a9a0f878-ef24-53af-8bd4-10a12036221e'}})  2025-02-04 09:24:31.818060 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '857e455f-002b-509a-b66d-9c4a1025daeb'}})  2025-02-04 09:24:31.818097 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:24:31.818294 | orchestrator | 2025-02-04 09:24:31.821479 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-02-04 09:24:31.823777 | orchestrator | Tuesday 04 February 2025 09:24:31 +0000 (0:00:00.169) 0:00:33.162 ****** 2025-02-04 09:24:31.983538 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a9a0f878-ef24-53af-8bd4-10a12036221e'}})  2025-02-04 09:24:31.983720 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '857e455f-002b-509a-b66d-9c4a1025daeb'}})  2025-02-04 09:24:31.984408 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:24:31.988102 | orchestrator | 2025-02-04 09:24:31.989420 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-02-04 09:24:31.990464 | orchestrator | Tuesday 04 February 2025 09:24:31 +0000 (0:00:00.167) 0:00:33.329 ****** 2025-02-04 09:24:32.147337 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:24:32.147791 | orchestrator | 2025-02-04 09:24:32.148382 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-02-04 09:24:32.149031 | orchestrator | Tuesday 04 February 2025 09:24:32 +0000 
(0:00:00.163) 0:00:33.493 ****** 2025-02-04 09:24:32.343896 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:24:32.344050 | orchestrator | 2025-02-04 09:24:32.344077 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-02-04 09:24:32.350731 | orchestrator | Tuesday 04 February 2025 09:24:32 +0000 (0:00:00.194) 0:00:33.688 ****** 2025-02-04 09:24:32.555632 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:24:32.557220 | orchestrator | 2025-02-04 09:24:32.557705 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-02-04 09:24:32.561120 | orchestrator | Tuesday 04 February 2025 09:24:32 +0000 (0:00:00.195) 0:00:33.883 ****** 2025-02-04 09:24:32.740448 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:24:32.741331 | orchestrator | 2025-02-04 09:24:32.745774 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-02-04 09:24:32.745878 | orchestrator | Tuesday 04 February 2025 09:24:32 +0000 (0:00:00.181) 0:00:34.064 ****** 2025-02-04 09:24:32.874334 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:24:32.876728 | orchestrator | 2025-02-04 09:24:32.876854 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-02-04 09:24:33.035529 | orchestrator | Tuesday 04 February 2025 09:24:32 +0000 (0:00:00.156) 0:00:34.221 ****** 2025-02-04 09:24:33.035735 | orchestrator | ok: [testbed-node-4] => { 2025-02-04 09:24:33.035848 | orchestrator |  "ceph_osd_devices": { 2025-02-04 09:24:33.035873 | orchestrator |  "sdb": { 2025-02-04 09:24:33.036310 | orchestrator |  "osd_lvm_uuid": "a9a0f878-ef24-53af-8bd4-10a12036221e" 2025-02-04 09:24:33.036342 | orchestrator |  }, 2025-02-04 09:24:33.036722 | orchestrator |  "sdc": { 2025-02-04 09:24:33.037010 | orchestrator |  "osd_lvm_uuid": "857e455f-002b-509a-b66d-9c4a1025daeb" 2025-02-04 09:24:33.037553 | orchestrator |  } 2025-02-04 09:24:33.037738 | orchestrator |  } 2025-02-04 09:24:33.037995 | orchestrator | } 2025-02-04 09:24:33.038249 | orchestrator | 2025-02-04 09:24:33.038541 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-02-04 09:24:33.038911 | orchestrator | Tuesday 04 February 2025 09:24:33 +0000 (0:00:00.154) 0:00:34.375 ****** 2025-02-04 09:24:33.180041 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:24:33.180515 | orchestrator | 2025-02-04 09:24:33.183306 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-02-04 09:24:33.183892 | orchestrator | Tuesday 04 February 2025 09:24:33 +0000 (0:00:00.148) 0:00:34.524 ****** 2025-02-04 09:24:33.341358 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:24:33.342692 | orchestrator | 2025-02-04 09:24:33.343465 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-02-04 09:24:33.344519 | orchestrator | Tuesday 04 February 2025 09:24:33 +0000 (0:00:00.160) 0:00:34.685 ****** 2025-02-04 09:24:33.480886 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:24:33.482413 | orchestrator | 2025-02-04 09:24:33.484182 | orchestrator | TASK [Print configuration data] ************************************************ 2025-02-04 09:24:33.484537 | orchestrator | Tuesday 04 February 2025 09:24:33 +0000 (0:00:00.141) 0:00:34.826 ****** 2025-02-04 09:24:33.985814 | orchestrator | changed: [testbed-node-4] => { 2025-02-04 09:24:33.987364 | 
orchestrator |  "_ceph_configure_lvm_config_data": { 2025-02-04 09:24:33.987615 | orchestrator |  "ceph_osd_devices": { 2025-02-04 09:24:33.988442 | orchestrator |  "sdb": { 2025-02-04 09:24:33.989051 | orchestrator |  "osd_lvm_uuid": "a9a0f878-ef24-53af-8bd4-10a12036221e" 2025-02-04 09:24:33.989446 | orchestrator |  }, 2025-02-04 09:24:33.990063 | orchestrator |  "sdc": { 2025-02-04 09:24:33.990865 | orchestrator |  "osd_lvm_uuid": "857e455f-002b-509a-b66d-9c4a1025daeb" 2025-02-04 09:24:33.991189 | orchestrator |  } 2025-02-04 09:24:33.991801 | orchestrator |  }, 2025-02-04 09:24:33.992096 | orchestrator |  "lvm_volumes": [ 2025-02-04 09:24:33.992634 | orchestrator |  { 2025-02-04 09:24:33.993002 | orchestrator |  "data": "osd-block-a9a0f878-ef24-53af-8bd4-10a12036221e", 2025-02-04 09:24:33.993301 | orchestrator |  "data_vg": "ceph-a9a0f878-ef24-53af-8bd4-10a12036221e" 2025-02-04 09:24:33.994145 | orchestrator |  }, 2025-02-04 09:24:33.994300 | orchestrator |  { 2025-02-04 09:24:33.994717 | orchestrator |  "data": "osd-block-857e455f-002b-509a-b66d-9c4a1025daeb", 2025-02-04 09:24:33.994914 | orchestrator |  "data_vg": "ceph-857e455f-002b-509a-b66d-9c4a1025daeb" 2025-02-04 09:24:33.995298 | orchestrator |  } 2025-02-04 09:24:33.996859 | orchestrator |  ] 2025-02-04 09:24:33.997028 | orchestrator |  } 2025-02-04 09:24:33.997701 | orchestrator | } 2025-02-04 09:24:33.997931 | orchestrator | 2025-02-04 09:24:33.998381 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-02-04 09:24:33.998877 | orchestrator | Tuesday 04 February 2025 09:24:33 +0000 (0:00:00.504) 0:00:35.330 ****** 2025-02-04 09:24:35.486209 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-02-04 09:24:35.492721 | orchestrator | 2025-02-04 09:24:35.493999 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-02-04 09:24:35.494192 | orchestrator | 2025-02-04 09:24:35.494212 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-02-04 09:24:35.494238 | orchestrator | Tuesday 04 February 2025 09:24:35 +0000 (0:00:01.499) 0:00:36.830 ****** 2025-02-04 09:24:36.155544 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-02-04 09:24:36.158072 | orchestrator | 2025-02-04 09:24:36.158525 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-02-04 09:24:36.159748 | orchestrator | Tuesday 04 February 2025 09:24:36 +0000 (0:00:00.668) 0:00:37.498 ****** 2025-02-04 09:24:36.432120 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:24:36.432337 | orchestrator | 2025-02-04 09:24:36.433091 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:36.434847 | orchestrator | Tuesday 04 February 2025 09:24:36 +0000 (0:00:00.278) 0:00:37.777 ****** 2025-02-04 09:24:36.849028 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-02-04 09:24:36.849985 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-02-04 09:24:36.850629 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-02-04 09:24:36.851421 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-02-04 09:24:36.852220 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop4) 2025-02-04 09:24:36.853315 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-02-04 09:24:36.853876 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-02-04 09:24:36.854814 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-02-04 09:24:36.855149 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-02-04 09:24:36.855863 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-02-04 09:24:36.856297 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-02-04 09:24:36.857103 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-02-04 09:24:36.857333 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-02-04 09:24:36.858100 | orchestrator | 2025-02-04 09:24:36.858853 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:36.859107 | orchestrator | Tuesday 04 February 2025 09:24:36 +0000 (0:00:00.414) 0:00:38.192 ****** 2025-02-04 09:24:37.091438 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:24:37.092026 | orchestrator | 2025-02-04 09:24:37.092757 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:37.093386 | orchestrator | Tuesday 04 February 2025 09:24:37 +0000 (0:00:00.245) 0:00:38.438 ****** 2025-02-04 09:24:37.309978 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:24:37.310825 | orchestrator | 2025-02-04 09:24:37.312084 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:37.313210 | orchestrator | Tuesday 04 February 2025 09:24:37 +0000 (0:00:00.216) 0:00:38.654 ****** 2025-02-04 09:24:37.594888 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:24:37.595113 | orchestrator | 2025-02-04 09:24:37.597737 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:37.598212 | orchestrator | Tuesday 04 February 2025 09:24:37 +0000 (0:00:00.283) 0:00:38.938 ****** 2025-02-04 09:24:37.967704 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:24:37.969089 | orchestrator | 2025-02-04 09:24:37.971317 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:37.971917 | orchestrator | Tuesday 04 February 2025 09:24:37 +0000 (0:00:00.372) 0:00:39.310 ****** 2025-02-04 09:24:38.270140 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:24:38.270723 | orchestrator | 2025-02-04 09:24:38.270891 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:38.271566 | orchestrator | Tuesday 04 February 2025 09:24:38 +0000 (0:00:00.306) 0:00:39.617 ****** 2025-02-04 09:24:38.503867 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:24:38.503984 | orchestrator | 2025-02-04 09:24:38.503996 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:38.504309 | orchestrator | Tuesday 04 February 2025 09:24:38 +0000 (0:00:00.232) 0:00:39.849 ****** 2025-02-04 09:24:38.716260 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:24:38.716421 
| orchestrator | 2025-02-04 09:24:38.717906 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:38.721607 | orchestrator | Tuesday 04 February 2025 09:24:38 +0000 (0:00:00.211) 0:00:40.061 ****** 2025-02-04 09:24:39.405028 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:24:39.405196 | orchestrator | 2025-02-04 09:24:39.405219 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:39.405240 | orchestrator | Tuesday 04 February 2025 09:24:39 +0000 (0:00:00.689) 0:00:40.750 ****** 2025-02-04 09:24:39.831826 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a424fac1-723d-4e26-82ac-15e9ac8e6afc) 2025-02-04 09:24:39.832054 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a424fac1-723d-4e26-82ac-15e9ac8e6afc) 2025-02-04 09:24:39.833118 | orchestrator | 2025-02-04 09:24:39.833278 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:39.834171 | orchestrator | Tuesday 04 February 2025 09:24:39 +0000 (0:00:00.423) 0:00:41.174 ****** 2025-02-04 09:24:40.261220 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d26fda4b-4cd5-4c78-8c80-a561505edb1a) 2025-02-04 09:24:40.261559 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d26fda4b-4cd5-4c78-8c80-a561505edb1a) 2025-02-04 09:24:40.261890 | orchestrator | 2025-02-04 09:24:40.262280 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:40.262634 | orchestrator | Tuesday 04 February 2025 09:24:40 +0000 (0:00:00.432) 0:00:41.607 ****** 2025-02-04 09:24:40.734859 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6f1478d2-b213-4f65-abc0-539a0d8b61fa) 2025-02-04 09:24:40.737347 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6f1478d2-b213-4f65-abc0-539a0d8b61fa) 2025-02-04 09:24:40.737691 | orchestrator | 2025-02-04 09:24:40.738506 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:40.739504 | orchestrator | Tuesday 04 February 2025 09:24:40 +0000 (0:00:00.471) 0:00:42.079 ****** 2025-02-04 09:24:41.161329 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2e725b5a-39a0-4c9f-add8-ff554d181543) 2025-02-04 09:24:41.161572 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2e725b5a-39a0-4c9f-add8-ff554d181543) 2025-02-04 09:24:41.161597 | orchestrator | 2025-02-04 09:24:41.161616 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:24:41.162085 | orchestrator | Tuesday 04 February 2025 09:24:41 +0000 (0:00:00.428) 0:00:42.507 ****** 2025-02-04 09:24:41.524148 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-02-04 09:24:41.524499 | orchestrator | 2025-02-04 09:24:41.525102 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:41.525372 | orchestrator | Tuesday 04 February 2025 09:24:41 +0000 (0:00:00.361) 0:00:42.869 ****** 2025-02-04 09:24:41.975266 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-02-04 09:24:41.975464 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-02-04 09:24:41.975522 | orchestrator | 
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-02-04 09:24:41.975987 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-02-04 09:24:41.976173 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-02-04 09:24:41.976748 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-02-04 09:24:41.977548 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-02-04 09:24:41.978124 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-02-04 09:24:41.978373 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-02-04 09:24:41.978923 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-02-04 09:24:41.979127 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-02-04 09:24:41.979472 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-02-04 09:24:41.979828 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-02-04 09:24:41.980232 | orchestrator | 2025-02-04 09:24:41.980359 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:41.980874 | orchestrator | Tuesday 04 February 2025 09:24:41 +0000 (0:00:00.451) 0:00:43.320 ****** 2025-02-04 09:24:42.195737 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:24:42.195987 | orchestrator | 2025-02-04 09:24:42.197285 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:42.439683 | orchestrator | Tuesday 04 February 2025 09:24:42 +0000 (0:00:00.220) 0:00:43.541 ****** 2025-02-04 09:24:42.439899 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:24:42.439980 | orchestrator | 2025-02-04 09:24:42.441567 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:42.441679 | orchestrator | Tuesday 04 February 2025 09:24:42 +0000 (0:00:00.243) 0:00:43.784 ****** 2025-02-04 09:24:43.123183 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:24:43.123785 | orchestrator | 2025-02-04 09:24:43.124421 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:43.125480 | orchestrator | Tuesday 04 February 2025 09:24:43 +0000 (0:00:00.683) 0:00:44.468 ****** 2025-02-04 09:24:43.387986 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:24:43.388703 | orchestrator | 2025-02-04 09:24:43.388793 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:43.388872 | orchestrator | Tuesday 04 February 2025 09:24:43 +0000 (0:00:00.265) 0:00:44.733 ****** 2025-02-04 09:24:43.625787 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:24:43.626224 | orchestrator | 2025-02-04 09:24:43.626307 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:43.626824 | orchestrator | Tuesday 04 February 2025 09:24:43 +0000 (0:00:00.238) 0:00:44.972 ****** 2025-02-04 09:24:43.840281 | orchestrator | skipping: [testbed-node-5] 2025-02-04 
09:24:43.840532 | orchestrator | 2025-02-04 09:24:44.083041 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:44.083158 | orchestrator | Tuesday 04 February 2025 09:24:43 +0000 (0:00:00.213) 0:00:45.185 ****** 2025-02-04 09:24:44.083193 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:24:44.083645 | orchestrator | 2025-02-04 09:24:44.083715 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:44.084301 | orchestrator | Tuesday 04 February 2025 09:24:44 +0000 (0:00:00.242) 0:00:45.428 ****** 2025-02-04 09:24:44.309525 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:24:44.309940 | orchestrator | 2025-02-04 09:24:44.309984 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:44.310159 | orchestrator | Tuesday 04 February 2025 09:24:44 +0000 (0:00:00.227) 0:00:45.655 ****** 2025-02-04 09:24:45.028712 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-02-04 09:24:45.029173 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-02-04 09:24:45.029225 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-02-04 09:24:45.029318 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-02-04 09:24:45.030354 | orchestrator | 2025-02-04 09:24:45.030892 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:45.030964 | orchestrator | Tuesday 04 February 2025 09:24:45 +0000 (0:00:00.718) 0:00:46.373 ****** 2025-02-04 09:24:45.250260 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:24:45.250519 | orchestrator | 2025-02-04 09:24:45.250555 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:45.250873 | orchestrator | Tuesday 04 February 2025 09:24:45 +0000 (0:00:00.222) 0:00:46.596 ****** 2025-02-04 09:24:45.497352 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:24:45.498496 | orchestrator | 2025-02-04 09:24:45.498825 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:45.500193 | orchestrator | Tuesday 04 February 2025 09:24:45 +0000 (0:00:00.244) 0:00:46.841 ****** 2025-02-04 09:24:45.756533 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:24:45.757213 | orchestrator | 2025-02-04 09:24:45.758947 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:24:45.759066 | orchestrator | Tuesday 04 February 2025 09:24:45 +0000 (0:00:00.260) 0:00:47.101 ****** 2025-02-04 09:24:45.980125 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:24:45.980522 | orchestrator | 2025-02-04 09:24:45.981482 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-02-04 09:24:45.983023 | orchestrator | Tuesday 04 February 2025 09:24:45 +0000 (0:00:00.222) 0:00:47.324 ****** 2025-02-04 09:24:46.449647 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-02-04 09:24:46.449936 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-02-04 09:24:46.450881 | orchestrator | 2025-02-04 09:24:46.452900 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-02-04 09:24:46.455313 | orchestrator | Tuesday 04 February 2025 09:24:46 +0000 (0:00:00.470) 0:00:47.794 ****** 2025-02-04 09:24:46.587482 | 
orchestrator | skipping: [testbed-node-5] 2025-02-04 09:24:46.588595 | orchestrator | 2025-02-04 09:24:46.589972 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-02-04 09:24:46.591954 | orchestrator | Tuesday 04 February 2025 09:24:46 +0000 (0:00:00.138) 0:00:47.933 ****** 2025-02-04 09:24:46.761452 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:24:46.761850 | orchestrator | 2025-02-04 09:24:46.761898 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-02-04 09:24:46.761923 | orchestrator | Tuesday 04 February 2025 09:24:46 +0000 (0:00:00.172) 0:00:48.106 ****** 2025-02-04 09:24:46.903020 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:24:46.903178 | orchestrator | 2025-02-04 09:24:46.905857 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-02-04 09:24:47.086823 | orchestrator | Tuesday 04 February 2025 09:24:46 +0000 (0:00:00.140) 0:00:48.247 ****** 2025-02-04 09:24:47.086966 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:24:47.319966 | orchestrator | 2025-02-04 09:24:47.320051 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-02-04 09:24:47.320060 | orchestrator | Tuesday 04 February 2025 09:24:47 +0000 (0:00:00.183) 0:00:48.430 ****** 2025-02-04 09:24:47.320081 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '25e96ed1-6b8f-57c8-bdd9-51fb1c446a39'}}) 2025-02-04 09:24:47.320115 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '89dbb78a-6e2f-596a-9aad-74f54f8525ce'}}) 2025-02-04 09:24:47.320446 | orchestrator | 2025-02-04 09:24:47.320879 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-02-04 09:24:47.321247 | orchestrator | Tuesday 04 February 2025 09:24:47 +0000 (0:00:00.233) 0:00:48.664 ****** 2025-02-04 09:24:47.489915 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '25e96ed1-6b8f-57c8-bdd9-51fb1c446a39'}})  2025-02-04 09:24:47.491007 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '89dbb78a-6e2f-596a-9aad-74f54f8525ce'}})  2025-02-04 09:24:47.491083 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:24:47.491781 | orchestrator | 2025-02-04 09:24:47.492475 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-02-04 09:24:47.495029 | orchestrator | Tuesday 04 February 2025 09:24:47 +0000 (0:00:00.171) 0:00:48.836 ****** 2025-02-04 09:24:47.691350 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '25e96ed1-6b8f-57c8-bdd9-51fb1c446a39'}})  2025-02-04 09:24:47.692154 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '89dbb78a-6e2f-596a-9aad-74f54f8525ce'}})  2025-02-04 09:24:47.692846 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:24:47.693209 | orchestrator | 2025-02-04 09:24:47.693242 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-02-04 09:24:47.693394 | orchestrator | Tuesday 04 February 2025 09:24:47 +0000 (0:00:00.199) 0:00:49.036 ****** 2025-02-04 09:24:47.913926 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '25e96ed1-6b8f-57c8-bdd9-51fb1c446a39'}})  2025-02-04 09:24:47.914551 
| orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '89dbb78a-6e2f-596a-9aad-74f54f8525ce'}})  2025-02-04 09:24:47.915157 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:24:47.915238 | orchestrator | 2025-02-04 09:24:47.916784 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-02-04 09:24:47.917144 | orchestrator | Tuesday 04 February 2025 09:24:47 +0000 (0:00:00.221) 0:00:49.258 ****** 2025-02-04 09:24:48.065620 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:24:48.065856 | orchestrator | 2025-02-04 09:24:48.066153 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-02-04 09:24:48.066998 | orchestrator | Tuesday 04 February 2025 09:24:48 +0000 (0:00:00.152) 0:00:49.411 ****** 2025-02-04 09:24:48.202556 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:24:48.203042 | orchestrator | 2025-02-04 09:24:48.204057 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-02-04 09:24:48.204545 | orchestrator | Tuesday 04 February 2025 09:24:48 +0000 (0:00:00.137) 0:00:49.548 ****** 2025-02-04 09:24:48.561044 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:24:48.561245 | orchestrator | 2025-02-04 09:24:48.561968 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-02-04 09:24:48.562226 | orchestrator | Tuesday 04 February 2025 09:24:48 +0000 (0:00:00.359) 0:00:49.907 ****** 2025-02-04 09:24:48.747191 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:24:48.747407 | orchestrator | 2025-02-04 09:24:48.747443 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-02-04 09:24:48.878252 | orchestrator | Tuesday 04 February 2025 09:24:48 +0000 (0:00:00.184) 0:00:50.092 ****** 2025-02-04 09:24:48.878383 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:24:48.879126 | orchestrator | 2025-02-04 09:24:48.879912 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-02-04 09:24:48.881068 | orchestrator | Tuesday 04 February 2025 09:24:48 +0000 (0:00:00.131) 0:00:50.223 ****** 2025-02-04 09:24:49.028523 | orchestrator | ok: [testbed-node-5] => { 2025-02-04 09:24:49.033511 | orchestrator |  "ceph_osd_devices": { 2025-02-04 09:24:49.035212 | orchestrator |  "sdb": { 2025-02-04 09:24:49.035270 | orchestrator |  "osd_lvm_uuid": "25e96ed1-6b8f-57c8-bdd9-51fb1c446a39" 2025-02-04 09:24:49.039814 | orchestrator |  }, 2025-02-04 09:24:49.044064 | orchestrator |  "sdc": { 2025-02-04 09:24:49.047744 | orchestrator |  "osd_lvm_uuid": "89dbb78a-6e2f-596a-9aad-74f54f8525ce" 2025-02-04 09:24:49.047809 | orchestrator |  } 2025-02-04 09:24:49.047837 | orchestrator |  } 2025-02-04 09:24:49.048002 | orchestrator | } 2025-02-04 09:24:49.048465 | orchestrator | 2025-02-04 09:24:49.049448 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-02-04 09:24:49.050511 | orchestrator | Tuesday 04 February 2025 09:24:49 +0000 (0:00:00.148) 0:00:50.371 ****** 2025-02-04 09:24:49.187591 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:24:49.187945 | orchestrator | 2025-02-04 09:24:49.188314 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-02-04 09:24:49.188361 | orchestrator | Tuesday 04 February 2025 09:24:49 +0000 (0:00:00.162) 0:00:50.534 ****** 2025-02-04 
09:24:49.322994 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:24:49.324567 | orchestrator | 2025-02-04 09:24:49.325331 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-02-04 09:24:49.326703 | orchestrator | Tuesday 04 February 2025 09:24:49 +0000 (0:00:00.134) 0:00:50.668 ****** 2025-02-04 09:24:49.500127 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:24:49.500283 | orchestrator | 2025-02-04 09:24:49.500473 | orchestrator | TASK [Print configuration data] ************************************************ 2025-02-04 09:24:49.501115 | orchestrator | Tuesday 04 February 2025 09:24:49 +0000 (0:00:00.177) 0:00:50.846 ****** 2025-02-04 09:24:49.798092 | orchestrator | changed: [testbed-node-5] => { 2025-02-04 09:24:49.799502 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-02-04 09:24:49.801506 | orchestrator |  "ceph_osd_devices": { 2025-02-04 09:24:49.801638 | orchestrator |  "sdb": { 2025-02-04 09:24:49.803496 | orchestrator |  "osd_lvm_uuid": "25e96ed1-6b8f-57c8-bdd9-51fb1c446a39" 2025-02-04 09:24:49.804016 | orchestrator |  }, 2025-02-04 09:24:49.805292 | orchestrator |  "sdc": { 2025-02-04 09:24:49.805766 | orchestrator |  "osd_lvm_uuid": "89dbb78a-6e2f-596a-9aad-74f54f8525ce" 2025-02-04 09:24:49.806762 | orchestrator |  } 2025-02-04 09:24:49.807469 | orchestrator |  }, 2025-02-04 09:24:49.808072 | orchestrator |  "lvm_volumes": [ 2025-02-04 09:24:49.808401 | orchestrator |  { 2025-02-04 09:24:49.809118 | orchestrator |  "data": "osd-block-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39", 2025-02-04 09:24:49.809256 | orchestrator |  "data_vg": "ceph-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39" 2025-02-04 09:24:49.809723 | orchestrator |  }, 2025-02-04 09:24:49.810184 | orchestrator |  { 2025-02-04 09:24:49.810605 | orchestrator |  "data": "osd-block-89dbb78a-6e2f-596a-9aad-74f54f8525ce", 2025-02-04 09:24:49.810931 | orchestrator |  "data_vg": "ceph-89dbb78a-6e2f-596a-9aad-74f54f8525ce" 2025-02-04 09:24:49.811405 | orchestrator |  } 2025-02-04 09:24:49.811767 | orchestrator |  ] 2025-02-04 09:24:49.811931 | orchestrator |  } 2025-02-04 09:24:49.812271 | orchestrator | } 2025-02-04 09:24:49.812760 | orchestrator | 2025-02-04 09:24:49.812988 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-02-04 09:24:49.813182 | orchestrator | Tuesday 04 February 2025 09:24:49 +0000 (0:00:00.297) 0:00:51.143 ****** 2025-02-04 09:24:51.145404 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-02-04 09:24:51.145578 | orchestrator | 2025-02-04 09:24:51.145604 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:24:51.145635 | orchestrator | 2025-02-04 09:24:51 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-04 09:24:51.145771 | orchestrator | 2025-02-04 09:24:51 | INFO  | Please wait and do not abort execution. 
2025-02-04 09:24:51.145890 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-02-04 09:24:51.146230 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-02-04 09:24:51.146493 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-02-04 09:24:51.147181 | orchestrator | 2025-02-04 09:24:51.147329 | orchestrator | 2025-02-04 09:24:51.147574 | orchestrator | 2025-02-04 09:24:51.147720 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:24:51.147946 | orchestrator | Tuesday 04 February 2025 09:24:51 +0000 (0:00:01.347) 0:00:52.491 ****** 2025-02-04 09:24:51.148246 | orchestrator | =============================================================================== 2025-02-04 09:24:51.148437 | orchestrator | Write configuration file ------------------------------------------------ 5.65s 2025-02-04 09:24:51.149302 | orchestrator | Add known links to the list of available block devices ------------------ 1.67s 2025-02-04 09:24:51.150113 | orchestrator | Add known partitions to the list of available block devices ------------- 1.54s 2025-02-04 09:24:51.150288 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.40s 2025-02-04 09:24:51.150315 | orchestrator | Print configuration data ------------------------------------------------ 1.10s 2025-02-04 09:24:51.151620 | orchestrator | Add known links to the list of available block devices ------------------ 1.08s 2025-02-04 09:24:51.152044 | orchestrator | Add known partitions to the list of available block devices ------------- 1.07s 2025-02-04 09:24:51.153458 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.94s 2025-02-04 09:24:51.154369 | orchestrator | Get initial list of available block devices ----------------------------- 0.90s 2025-02-04 09:24:51.155052 | orchestrator | Add known links to the list of available block devices ------------------ 0.88s 2025-02-04 09:24:51.156010 | orchestrator | Add known partitions to the list of available block devices ------------- 0.84s 2025-02-04 09:24:51.156785 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.83s 2025-02-04 09:24:51.157406 | orchestrator | Add known links to the list of available block devices ------------------ 0.80s 2025-02-04 09:24:51.158172 | orchestrator | Generate WAL VG names --------------------------------------------------- 0.79s 2025-02-04 09:24:51.159487 | orchestrator | Add known links to the list of available block devices ------------------ 0.75s 2025-02-04 09:24:51.159727 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.72s 2025-02-04 09:24:51.159766 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s 2025-02-04 09:24:51.159890 | orchestrator | Set DB devices config data ---------------------------------------------- 0.71s 2025-02-04 09:24:51.161116 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s 2025-02-04 09:24:51.161309 | orchestrator | Set DB+WAL devices config data ------------------------------------------ 0.69s 2025-02-04 09:24:53.486569 | orchestrator | 2025-02-04 09:24:53 | INFO  | Task 1f180831-d2a7-4d48-8356-75d268ca6ab6 is running in background. Output coming soon. 
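For reference, the configuration data printed above for testbed-node-5 maps onto a host-vars snippet of roughly the following shape. This is a minimal sketch: the file name and location are assumptions (the log only shows that the 'Write configuration file' handler ran delegated to testbed-manager), while the UUIDs and VG/LV names are taken verbatim from the 'Print configuration data' output.

# Hypothetical host_vars file produced by the 'Write configuration file' handler;
# the path is an assumption, the values are from the log above.
ceph_osd_devices:
  sdb:
    osd_lvm_uuid: 25e96ed1-6b8f-57c8-bdd9-51fb1c446a39
  sdc:
    osd_lvm_uuid: 89dbb78a-6e2f-596a-9aad-74f54f8525ce
lvm_volumes:
  - data: osd-block-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39
    data_vg: ceph-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39
  - data: osd-block-89dbb78a-6e2f-596a-9aad-74f54f8525ce
    data_vg: ceph-89dbb78a-6e2f-596a-9aad-74f54f8525ce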
2025-02-04 09:25:30.802388 | orchestrator | 2025-02-04 09:25:21 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-02-04 09:25:32.565047 | orchestrator | 2025-02-04 09:25:21 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-02-04 09:25:32.565125 | orchestrator | 2025-02-04 09:25:21 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-02-04 09:25:32.565136 | orchestrator | 2025-02-04 09:25:22 | INFO  | Handling group overwrites in 99-overwrite 2025-02-04 09:25:32.565158 | orchestrator | 2025-02-04 09:25:22 | INFO  | Removing group ceph-mds from 50-ceph 2025-02-04 09:25:32.565179 | orchestrator | 2025-02-04 09:25:22 | INFO  | Removing group ceph-rgw from 50-ceph 2025-02-04 09:25:32.565188 | orchestrator | 2025-02-04 09:25:22 | INFO  | Removing group netbird:children from 50-infrastructure 2025-02-04 09:25:32.565196 | orchestrator | 2025-02-04 09:25:22 | INFO  | Removing group storage:children from 50-kolla 2025-02-04 09:25:32.565205 | orchestrator | 2025-02-04 09:25:22 | INFO  | Removing group frr:children from 60-generic 2025-02-04 09:25:32.565214 | orchestrator | 2025-02-04 09:25:22 | INFO  | Handling group overwrites in 20-roles 2025-02-04 09:25:32.565223 | orchestrator | 2025-02-04 09:25:22 | INFO  | Removing group k3s_node from 50-infrastructure 2025-02-04 09:25:32.565249 | orchestrator | 2025-02-04 09:25:22 | INFO  | File 20-netbox not found in /inventory.pre/ 2025-02-04 09:25:32.565258 | orchestrator | 2025-02-04 09:25:30 | INFO  | Writing /inventory/clustershell/ansible.yaml with clustershell groups 2025-02-04 09:25:32.565281 | orchestrator | 2025-02-04 09:25:32 | INFO  | Task fac1212c-d642-473b-8aac-3e58a3a0471c (ceph-create-lvm-devices) was prepared for execution. 2025-02-04 09:25:36.044197 | orchestrator | 2025-02-04 09:25:32 | INFO  | It takes a moment until task fac1212c-d642-473b-8aac-3e58a3a0471c (ceph-create-lvm-devices) has been started and output is visible here. 
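The ceph-create-lvm-devices play that follows turns the generated ceph_osd_devices entries into LVM volume groups and logical volumes. In effect, its 'Create block VGs' and 'Create block LVs' tasks behave roughly like the Ansible sketch below. The module names and parameters are assumptions rather than quotes from this log; the loop items match the ones printed further down, and _block_vgs_to_pvs is a hypothetical stand-in for the mapping built by 'Create dict of block VGs -> PVs from ceph_osd_devices'.

- name: Create block VGs
  community.general.lvg:
    vg: "{{ item.data_vg }}"
    # hypothetical helper dict resolving e.g. ceph-8b56b489-... -> /dev/sdb
    pvs: "{{ _block_vgs_to_pvs[item.data_vg] }}"
  loop: "{{ lvm_volumes }}"

- name: Create block LVs
  community.general.lvol:
    vg: "{{ item.data_vg }}"
    lv: "{{ item.data }}"
    size: 100%VG  # assumption: each block LV consumes its whole VG
  loop: "{{ lvm_volumes }}"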
2025-02-04 09:25:36.044320 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-02-04 09:25:36.572980 | orchestrator | 2025-02-04 09:25:36.576121 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-02-04 09:25:36.577368 | orchestrator | 2025-02-04 09:25:36.577407 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-02-04 09:25:36.578765 | orchestrator | Tuesday 04 February 2025 09:25:36 +0000 (0:00:00.442) 0:00:00.442 ****** 2025-02-04 09:25:36.825118 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-02-04 09:25:36.825404 | orchestrator | 2025-02-04 09:25:36.826598 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-02-04 09:25:36.827994 | orchestrator | Tuesday 04 February 2025 09:25:36 +0000 (0:00:00.255) 0:00:00.697 ****** 2025-02-04 09:25:37.060251 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:25:37.061646 | orchestrator | 2025-02-04 09:25:37.064791 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:25:37.911062 | orchestrator | Tuesday 04 February 2025 09:25:37 +0000 (0:00:00.234) 0:00:00.932 ****** 2025-02-04 09:25:37.911166 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-02-04 09:25:37.911479 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-02-04 09:25:37.911511 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-02-04 09:25:37.911919 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-02-04 09:25:37.914822 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-02-04 09:25:37.915328 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-02-04 09:25:37.915355 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-02-04 09:25:37.915370 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-02-04 09:25:37.915404 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-02-04 09:25:37.915425 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-02-04 09:25:37.916022 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-02-04 09:25:37.916914 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-02-04 09:25:37.917133 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-02-04 09:25:37.917689 | orchestrator | 2025-02-04 09:25:37.918101 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:25:37.918787 | orchestrator | Tuesday 04 February 2025 09:25:37 +0000 (0:00:00.850) 0:00:01.782 ****** 2025-02-04 09:25:38.121382 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:38.121914 | orchestrator | 2025-02-04 09:25:38.122346 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:25:38.122908 | orchestrator | Tuesday 04 February 2025 09:25:38 +0000 
(0:00:00.212) 0:00:01.995 ****** 2025-02-04 09:25:38.324380 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:38.325774 | orchestrator | 2025-02-04 09:25:38.326817 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:25:38.328058 | orchestrator | Tuesday 04 February 2025 09:25:38 +0000 (0:00:00.202) 0:00:02.197 ****** 2025-02-04 09:25:38.540934 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:38.541387 | orchestrator | 2025-02-04 09:25:38.544964 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:25:38.546070 | orchestrator | Tuesday 04 February 2025 09:25:38 +0000 (0:00:00.215) 0:00:02.413 ****** 2025-02-04 09:25:38.754212 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:38.754362 | orchestrator | 2025-02-04 09:25:38.755542 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:25:38.756852 | orchestrator | Tuesday 04 February 2025 09:25:38 +0000 (0:00:00.214) 0:00:02.627 ****** 2025-02-04 09:25:38.956551 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:38.957836 | orchestrator | 2025-02-04 09:25:38.958572 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:25:38.962774 | orchestrator | Tuesday 04 February 2025 09:25:38 +0000 (0:00:00.201) 0:00:02.829 ****** 2025-02-04 09:25:39.188576 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:39.189086 | orchestrator | 2025-02-04 09:25:39.189146 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:25:39.189606 | orchestrator | Tuesday 04 February 2025 09:25:39 +0000 (0:00:00.232) 0:00:03.062 ****** 2025-02-04 09:25:39.380656 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:39.381408 | orchestrator | 2025-02-04 09:25:39.382526 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:25:39.383253 | orchestrator | Tuesday 04 February 2025 09:25:39 +0000 (0:00:00.191) 0:00:03.254 ****** 2025-02-04 09:25:39.589439 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:39.590384 | orchestrator | 2025-02-04 09:25:39.593665 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:25:39.594702 | orchestrator | Tuesday 04 February 2025 09:25:39 +0000 (0:00:00.208) 0:00:03.462 ****** 2025-02-04 09:25:40.402369 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_34756b4f-e35d-475a-95c3-a17bc4378557) 2025-02-04 09:25:40.403442 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_34756b4f-e35d-475a-95c3-a17bc4378557) 2025-02-04 09:25:40.403772 | orchestrator | 2025-02-04 09:25:40.404576 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:25:40.407618 | orchestrator | Tuesday 04 February 2025 09:25:40 +0000 (0:00:00.813) 0:00:04.275 ****** 2025-02-04 09:25:40.846845 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3639d977-d811-449d-b930-d83a01ae7e68) 2025-02-04 09:25:40.850506 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3639d977-d811-449d-b930-d83a01ae7e68) 2025-02-04 09:25:40.850551 | orchestrator | 2025-02-04 09:25:40.850576 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 
09:25:41.362449 | orchestrator | Tuesday 04 February 2025 09:25:40 +0000 (0:00:00.440) 0:00:04.716 ****** 2025-02-04 09:25:41.362583 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_77d1cf45-53d9-435f-b362-8711a42fa03b) 2025-02-04 09:25:41.365061 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_77d1cf45-53d9-435f-b362-8711a42fa03b) 2025-02-04 09:25:41.365429 | orchestrator | 2025-02-04 09:25:41.366160 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:25:41.366863 | orchestrator | Tuesday 04 February 2025 09:25:41 +0000 (0:00:00.519) 0:00:05.236 ****** 2025-02-04 09:25:41.885057 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5ef04da4-33c0-4c31-8f35-70c17ff294fe) 2025-02-04 09:25:41.886177 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5ef04da4-33c0-4c31-8f35-70c17ff294fe) 2025-02-04 09:25:41.886870 | orchestrator | 2025-02-04 09:25:41.887235 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:25:41.888050 | orchestrator | Tuesday 04 February 2025 09:25:41 +0000 (0:00:00.519) 0:00:05.756 ****** 2025-02-04 09:25:42.227051 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-02-04 09:25:42.227505 | orchestrator | 2025-02-04 09:25:42.228328 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:25:42.230168 | orchestrator | Tuesday 04 February 2025 09:25:42 +0000 (0:00:00.342) 0:00:06.098 ****** 2025-02-04 09:25:42.779044 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-02-04 09:25:42.779265 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-02-04 09:25:42.779772 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-02-04 09:25:42.780270 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-02-04 09:25:42.780887 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-02-04 09:25:42.783213 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-02-04 09:25:42.784080 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-02-04 09:25:42.785328 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-02-04 09:25:42.785467 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-02-04 09:25:42.786623 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-02-04 09:25:42.787476 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-02-04 09:25:42.788399 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-02-04 09:25:42.789818 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-02-04 09:25:42.790105 | orchestrator | 2025-02-04 09:25:42.790720 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:25:42.791484 | orchestrator | Tuesday 04 February 2025 09:25:42 
+0000 (0:00:00.551) 0:00:06.650 ****** 2025-02-04 09:25:42.987693 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:42.988081 | orchestrator | 2025-02-04 09:25:42.989076 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:25:42.990781 | orchestrator | Tuesday 04 February 2025 09:25:42 +0000 (0:00:00.209) 0:00:06.859 ****** 2025-02-04 09:25:43.202540 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:43.203016 | orchestrator | 2025-02-04 09:25:43.203055 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:25:43.203079 | orchestrator | Tuesday 04 February 2025 09:25:43 +0000 (0:00:00.214) 0:00:07.074 ****** 2025-02-04 09:25:43.408399 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:43.408604 | orchestrator | 2025-02-04 09:25:43.409046 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:25:43.410131 | orchestrator | Tuesday 04 February 2025 09:25:43 +0000 (0:00:00.207) 0:00:07.281 ****** 2025-02-04 09:25:43.604243 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:43.604849 | orchestrator | 2025-02-04 09:25:43.609535 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:25:43.610063 | orchestrator | Tuesday 04 February 2025 09:25:43 +0000 (0:00:00.195) 0:00:07.476 ****** 2025-02-04 09:25:44.087202 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:44.087981 | orchestrator | 2025-02-04 09:25:44.088332 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:25:44.088614 | orchestrator | Tuesday 04 February 2025 09:25:44 +0000 (0:00:00.483) 0:00:07.960 ****** 2025-02-04 09:25:44.299664 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:44.307221 | orchestrator | 2025-02-04 09:25:44.309810 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:25:44.309871 | orchestrator | Tuesday 04 February 2025 09:25:44 +0000 (0:00:00.210) 0:00:08.171 ****** 2025-02-04 09:25:44.498723 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:44.499323 | orchestrator | 2025-02-04 09:25:44.500053 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:25:44.503423 | orchestrator | Tuesday 04 February 2025 09:25:44 +0000 (0:00:00.198) 0:00:08.370 ****** 2025-02-04 09:25:44.705856 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:44.707372 | orchestrator | 2025-02-04 09:25:44.708307 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:25:44.710502 | orchestrator | Tuesday 04 February 2025 09:25:44 +0000 (0:00:00.208) 0:00:08.578 ****** 2025-02-04 09:25:45.394623 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-02-04 09:25:45.394860 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-02-04 09:25:45.395055 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-02-04 09:25:45.395711 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-02-04 09:25:45.396080 | orchestrator | 2025-02-04 09:25:45.396620 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:25:45.396986 | orchestrator | Tuesday 04 February 2025 09:25:45 +0000 (0:00:00.688) 0:00:09.267 ****** 2025-02-04 09:25:45.620609 | orchestrator | 
skipping: [testbed-node-3] 2025-02-04 09:25:45.623942 | orchestrator | 2025-02-04 09:25:45.831757 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:25:45.831878 | orchestrator | Tuesday 04 February 2025 09:25:45 +0000 (0:00:00.223) 0:00:09.491 ****** 2025-02-04 09:25:45.831915 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:45.832312 | orchestrator | 2025-02-04 09:25:45.832348 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:25:45.832413 | orchestrator | Tuesday 04 February 2025 09:25:45 +0000 (0:00:00.213) 0:00:09.704 ****** 2025-02-04 09:25:46.040924 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:46.041805 | orchestrator | 2025-02-04 09:25:46.041848 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:25:46.041877 | orchestrator | Tuesday 04 February 2025 09:25:46 +0000 (0:00:00.208) 0:00:09.913 ****** 2025-02-04 09:25:46.225441 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:46.374530 | orchestrator | 2025-02-04 09:25:46.374647 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-02-04 09:25:46.374668 | orchestrator | Tuesday 04 February 2025 09:25:46 +0000 (0:00:00.184) 0:00:10.097 ****** 2025-02-04 09:25:46.374753 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:46.375152 | orchestrator | 2025-02-04 09:25:46.375501 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-02-04 09:25:46.376053 | orchestrator | Tuesday 04 February 2025 09:25:46 +0000 (0:00:00.149) 0:00:10.247 ****** 2025-02-04 09:25:46.800754 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8b56b489-397c-55c4-ba6f-4e97fbbc410a'}}) 2025-02-04 09:25:46.800928 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fd89a215-a86e-5b79-8dd1-0773a21fefe5'}}) 2025-02-04 09:25:46.801036 | orchestrator | 2025-02-04 09:25:46.801370 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-02-04 09:25:46.801841 | orchestrator | Tuesday 04 February 2025 09:25:46 +0000 (0:00:00.425) 0:00:10.672 ****** 2025-02-04 09:25:48.694352 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8b56b489-397c-55c4-ba6f-4e97fbbc410a', 'data_vg': 'ceph-8b56b489-397c-55c4-ba6f-4e97fbbc410a'}) 2025-02-04 09:25:48.694552 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-fd89a215-a86e-5b79-8dd1-0773a21fefe5', 'data_vg': 'ceph-fd89a215-a86e-5b79-8dd1-0773a21fefe5'}) 2025-02-04 09:25:48.695501 | orchestrator | 2025-02-04 09:25:48.696329 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-02-04 09:25:48.877098 | orchestrator | Tuesday 04 February 2025 09:25:48 +0000 (0:00:01.893) 0:00:12.565 ****** 2025-02-04 09:25:48.877248 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8b56b489-397c-55c4-ba6f-4e97fbbc410a', 'data_vg': 'ceph-8b56b489-397c-55c4-ba6f-4e97fbbc410a'})  2025-02-04 09:25:48.883391 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fd89a215-a86e-5b79-8dd1-0773a21fefe5', 'data_vg': 'ceph-fd89a215-a86e-5b79-8dd1-0773a21fefe5'})  2025-02-04 09:25:48.884279 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:48.884324 | orchestrator | 2025-02-04 09:25:48.886877 | 
orchestrator | TASK [Create block LVs] ******************************************************** 2025-02-04 09:25:48.889008 | orchestrator | Tuesday 04 February 2025 09:25:48 +0000 (0:00:00.182) 0:00:12.748 ****** 2025-02-04 09:25:50.358907 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8b56b489-397c-55c4-ba6f-4e97fbbc410a', 'data_vg': 'ceph-8b56b489-397c-55c4-ba6f-4e97fbbc410a'}) 2025-02-04 09:25:50.359043 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-fd89a215-a86e-5b79-8dd1-0773a21fefe5', 'data_vg': 'ceph-fd89a215-a86e-5b79-8dd1-0773a21fefe5'}) 2025-02-04 09:25:50.359936 | orchestrator | 2025-02-04 09:25:50.359998 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-02-04 09:25:50.360748 | orchestrator | Tuesday 04 February 2025 09:25:50 +0000 (0:00:01.481) 0:00:14.229 ****** 2025-02-04 09:25:50.552566 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8b56b489-397c-55c4-ba6f-4e97fbbc410a', 'data_vg': 'ceph-8b56b489-397c-55c4-ba6f-4e97fbbc410a'})  2025-02-04 09:25:50.553020 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fd89a215-a86e-5b79-8dd1-0773a21fefe5', 'data_vg': 'ceph-fd89a215-a86e-5b79-8dd1-0773a21fefe5'})  2025-02-04 09:25:50.553903 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:50.554639 | orchestrator | 2025-02-04 09:25:50.555405 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-02-04 09:25:50.556147 | orchestrator | Tuesday 04 February 2025 09:25:50 +0000 (0:00:00.195) 0:00:14.425 ****** 2025-02-04 09:25:50.690931 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:50.691384 | orchestrator | 2025-02-04 09:25:50.691427 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-02-04 09:25:50.691881 | orchestrator | Tuesday 04 February 2025 09:25:50 +0000 (0:00:00.137) 0:00:14.562 ****** 2025-02-04 09:25:50.871362 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8b56b489-397c-55c4-ba6f-4e97fbbc410a', 'data_vg': 'ceph-8b56b489-397c-55c4-ba6f-4e97fbbc410a'})  2025-02-04 09:25:50.872266 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fd89a215-a86e-5b79-8dd1-0773a21fefe5', 'data_vg': 'ceph-fd89a215-a86e-5b79-8dd1-0773a21fefe5'})  2025-02-04 09:25:50.873261 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:50.874851 | orchestrator | 2025-02-04 09:25:50.876356 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-02-04 09:25:50.876788 | orchestrator | Tuesday 04 February 2025 09:25:50 +0000 (0:00:00.180) 0:00:14.743 ****** 2025-02-04 09:25:51.030116 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:51.030299 | orchestrator | 2025-02-04 09:25:51.030331 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-02-04 09:25:51.030788 | orchestrator | Tuesday 04 February 2025 09:25:51 +0000 (0:00:00.155) 0:00:14.899 ****** 2025-02-04 09:25:51.223110 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8b56b489-397c-55c4-ba6f-4e97fbbc410a', 'data_vg': 'ceph-8b56b489-397c-55c4-ba6f-4e97fbbc410a'})  2025-02-04 09:25:51.223293 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fd89a215-a86e-5b79-8dd1-0773a21fefe5', 'data_vg': 'ceph-fd89a215-a86e-5b79-8dd1-0773a21fefe5'})  2025-02-04 09:25:51.224011 | orchestrator | skipping: 
[testbed-node-3] 2025-02-04 09:25:51.225075 | orchestrator | 2025-02-04 09:25:51.225924 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-02-04 09:25:51.226127 | orchestrator | Tuesday 04 February 2025 09:25:51 +0000 (0:00:00.195) 0:00:15.095 ****** 2025-02-04 09:25:51.576084 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:51.576848 | orchestrator | 2025-02-04 09:25:51.577304 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-02-04 09:25:51.578117 | orchestrator | Tuesday 04 February 2025 09:25:51 +0000 (0:00:00.354) 0:00:15.449 ****** 2025-02-04 09:25:51.738472 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8b56b489-397c-55c4-ba6f-4e97fbbc410a', 'data_vg': 'ceph-8b56b489-397c-55c4-ba6f-4e97fbbc410a'})  2025-02-04 09:25:51.738636 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fd89a215-a86e-5b79-8dd1-0773a21fefe5', 'data_vg': 'ceph-fd89a215-a86e-5b79-8dd1-0773a21fefe5'})  2025-02-04 09:25:51.739158 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:51.739382 | orchestrator | 2025-02-04 09:25:51.739882 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-02-04 09:25:51.740235 | orchestrator | Tuesday 04 February 2025 09:25:51 +0000 (0:00:00.161) 0:00:15.611 ****** 2025-02-04 09:25:51.884544 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:25:51.885077 | orchestrator | 2025-02-04 09:25:51.886361 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-02-04 09:25:51.886963 | orchestrator | Tuesday 04 February 2025 09:25:51 +0000 (0:00:00.146) 0:00:15.757 ****** 2025-02-04 09:25:52.059381 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8b56b489-397c-55c4-ba6f-4e97fbbc410a', 'data_vg': 'ceph-8b56b489-397c-55c4-ba6f-4e97fbbc410a'})  2025-02-04 09:25:52.060167 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fd89a215-a86e-5b79-8dd1-0773a21fefe5', 'data_vg': 'ceph-fd89a215-a86e-5b79-8dd1-0773a21fefe5'})  2025-02-04 09:25:52.060215 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:52.061247 | orchestrator | 2025-02-04 09:25:52.064039 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-02-04 09:25:52.066955 | orchestrator | Tuesday 04 February 2025 09:25:52 +0000 (0:00:00.175) 0:00:15.932 ****** 2025-02-04 09:25:52.220345 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8b56b489-397c-55c4-ba6f-4e97fbbc410a', 'data_vg': 'ceph-8b56b489-397c-55c4-ba6f-4e97fbbc410a'})  2025-02-04 09:25:52.220966 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fd89a215-a86e-5b79-8dd1-0773a21fefe5', 'data_vg': 'ceph-fd89a215-a86e-5b79-8dd1-0773a21fefe5'})  2025-02-04 09:25:52.221441 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:52.222217 | orchestrator | 2025-02-04 09:25:52.222362 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-02-04 09:25:52.224756 | orchestrator | Tuesday 04 February 2025 09:25:52 +0000 (0:00:00.160) 0:00:16.093 ****** 2025-02-04 09:25:52.391320 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8b56b489-397c-55c4-ba6f-4e97fbbc410a', 'data_vg': 'ceph-8b56b489-397c-55c4-ba6f-4e97fbbc410a'})  2025-02-04 09:25:52.392748 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-fd89a215-a86e-5b79-8dd1-0773a21fefe5', 'data_vg': 'ceph-fd89a215-a86e-5b79-8dd1-0773a21fefe5'})  2025-02-04 09:25:52.394166 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:52.394910 | orchestrator | 2025-02-04 09:25:52.395585 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-02-04 09:25:52.396622 | orchestrator | Tuesday 04 February 2025 09:25:52 +0000 (0:00:00.170) 0:00:16.263 ****** 2025-02-04 09:25:52.541804 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:52.542589 | orchestrator | 2025-02-04 09:25:52.543194 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-02-04 09:25:52.544106 | orchestrator | Tuesday 04 February 2025 09:25:52 +0000 (0:00:00.147) 0:00:16.410 ****** 2025-02-04 09:25:52.674495 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:52.674611 | orchestrator | 2025-02-04 09:25:52.674841 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-02-04 09:25:52.675242 | orchestrator | Tuesday 04 February 2025 09:25:52 +0000 (0:00:00.136) 0:00:16.547 ****** 2025-02-04 09:25:52.817277 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:52.817619 | orchestrator | 2025-02-04 09:25:52.818285 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-02-04 09:25:52.818585 | orchestrator | Tuesday 04 February 2025 09:25:52 +0000 (0:00:00.140) 0:00:16.688 ****** 2025-02-04 09:25:52.956523 | orchestrator | ok: [testbed-node-3] => { 2025-02-04 09:25:52.957631 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-02-04 09:25:52.958930 | orchestrator | } 2025-02-04 09:25:52.960049 | orchestrator | 2025-02-04 09:25:52.960429 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-02-04 09:25:52.961606 | orchestrator | Tuesday 04 February 2025 09:25:52 +0000 (0:00:00.140) 0:00:16.828 ****** 2025-02-04 09:25:53.116204 | orchestrator | ok: [testbed-node-3] => { 2025-02-04 09:25:53.116577 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-02-04 09:25:53.118322 | orchestrator | } 2025-02-04 09:25:53.119033 | orchestrator | 2025-02-04 09:25:53.120309 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-02-04 09:25:53.121329 | orchestrator | Tuesday 04 February 2025 09:25:53 +0000 (0:00:00.158) 0:00:16.988 ****** 2025-02-04 09:25:53.269643 | orchestrator | ok: [testbed-node-3] => { 2025-02-04 09:25:53.269872 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-02-04 09:25:53.269917 | orchestrator | } 2025-02-04 09:25:53.269981 | orchestrator | 2025-02-04 09:25:53.270491 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-02-04 09:25:53.270868 | orchestrator | Tuesday 04 February 2025 09:25:53 +0000 (0:00:00.154) 0:00:17.142 ****** 2025-02-04 09:25:54.186409 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:25:54.188011 | orchestrator | 2025-02-04 09:25:54.661579 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-02-04 09:25:54.661768 | orchestrator | Tuesday 04 February 2025 09:25:54 +0000 (0:00:00.913) 0:00:18.056 ****** 2025-02-04 09:25:54.661808 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:25:54.662153 | orchestrator | 2025-02-04 09:25:54.662525 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] 
**************** 2025-02-04 09:25:54.662920 | orchestrator | Tuesday 04 February 2025 09:25:54 +0000 (0:00:00.477) 0:00:18.534 ****** 2025-02-04 09:25:55.139142 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:25:55.139391 | orchestrator | 2025-02-04 09:25:55.140841 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-02-04 09:25:55.140980 | orchestrator | Tuesday 04 February 2025 09:25:55 +0000 (0:00:00.475) 0:00:19.009 ****** 2025-02-04 09:25:55.298903 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:25:55.299304 | orchestrator | 2025-02-04 09:25:55.300379 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-02-04 09:25:55.300892 | orchestrator | Tuesday 04 February 2025 09:25:55 +0000 (0:00:00.162) 0:00:19.171 ****** 2025-02-04 09:25:55.427792 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:55.428011 | orchestrator | 2025-02-04 09:25:55.428497 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-02-04 09:25:55.429175 | orchestrator | Tuesday 04 February 2025 09:25:55 +0000 (0:00:00.128) 0:00:19.300 ****** 2025-02-04 09:25:55.561230 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:55.561854 | orchestrator | 2025-02-04 09:25:55.562113 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-02-04 09:25:55.562325 | orchestrator | Tuesday 04 February 2025 09:25:55 +0000 (0:00:00.125) 0:00:19.426 ****** 2025-02-04 09:25:55.691761 | orchestrator | ok: [testbed-node-3] => { 2025-02-04 09:25:55.691958 | orchestrator |  "vgs_report": { 2025-02-04 09:25:55.691990 | orchestrator |  "vg": [] 2025-02-04 09:25:55.692409 | orchestrator |  } 2025-02-04 09:25:55.692940 | orchestrator | } 2025-02-04 09:25:55.693708 | orchestrator | 2025-02-04 09:25:55.695757 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-02-04 09:25:55.836921 | orchestrator | Tuesday 04 February 2025 09:25:55 +0000 (0:00:00.138) 0:00:19.564 ****** 2025-02-04 09:25:55.836964 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:55.838157 | orchestrator | 2025-02-04 09:25:55.840645 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-02-04 09:25:55.841444 | orchestrator | Tuesday 04 February 2025 09:25:55 +0000 (0:00:00.144) 0:00:19.709 ****** 2025-02-04 09:25:55.966891 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:55.967436 | orchestrator | 2025-02-04 09:25:55.968050 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-02-04 09:25:55.968739 | orchestrator | Tuesday 04 February 2025 09:25:55 +0000 (0:00:00.131) 0:00:19.840 ****** 2025-02-04 09:25:56.112992 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:56.114883 | orchestrator | 2025-02-04 09:25:56.115042 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-02-04 09:25:56.116442 | orchestrator | Tuesday 04 February 2025 09:25:56 +0000 (0:00:00.145) 0:00:19.985 ****** 2025-02-04 09:25:56.277488 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:56.277954 | orchestrator | 2025-02-04 09:25:56.278002 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-02-04 09:25:56.278091 | orchestrator | Tuesday 04 February 2025 09:25:56 +0000 (0:00:00.161) 0:00:20.147 ****** 2025-02-04 
09:25:56.610932 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:56.611572 | orchestrator | 2025-02-04 09:25:56.612928 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-02-04 09:25:56.613587 | orchestrator | Tuesday 04 February 2025 09:25:56 +0000 (0:00:00.334) 0:00:20.482 ****** 2025-02-04 09:25:56.748536 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:56.749032 | orchestrator | 2025-02-04 09:25:56.749911 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-02-04 09:25:56.750720 | orchestrator | Tuesday 04 February 2025 09:25:56 +0000 (0:00:00.139) 0:00:20.621 ****** 2025-02-04 09:25:56.899147 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:56.899597 | orchestrator | 2025-02-04 09:25:56.902011 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-02-04 09:25:57.029648 | orchestrator | Tuesday 04 February 2025 09:25:56 +0000 (0:00:00.149) 0:00:20.770 ****** 2025-02-04 09:25:57.029840 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:57.030075 | orchestrator | 2025-02-04 09:25:57.030845 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-02-04 09:25:57.031729 | orchestrator | Tuesday 04 February 2025 09:25:57 +0000 (0:00:00.131) 0:00:20.902 ****** 2025-02-04 09:25:57.174751 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:57.176975 | orchestrator | 2025-02-04 09:25:57.177230 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-02-04 09:25:57.177728 | orchestrator | Tuesday 04 February 2025 09:25:57 +0000 (0:00:00.145) 0:00:21.047 ****** 2025-02-04 09:25:57.327974 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:57.328741 | orchestrator | 2025-02-04 09:25:57.328775 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-02-04 09:25:57.328797 | orchestrator | Tuesday 04 February 2025 09:25:57 +0000 (0:00:00.149) 0:00:21.197 ****** 2025-02-04 09:25:57.469327 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:57.470283 | orchestrator | 2025-02-04 09:25:57.470324 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-02-04 09:25:57.470803 | orchestrator | Tuesday 04 February 2025 09:25:57 +0000 (0:00:00.145) 0:00:21.342 ****** 2025-02-04 09:25:57.620341 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:57.620890 | orchestrator | 2025-02-04 09:25:57.621358 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-02-04 09:25:57.623987 | orchestrator | Tuesday 04 February 2025 09:25:57 +0000 (0:00:00.150) 0:00:21.492 ****** 2025-02-04 09:25:57.769664 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:57.769910 | orchestrator | 2025-02-04 09:25:57.769940 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-02-04 09:25:57.769963 | orchestrator | Tuesday 04 February 2025 09:25:57 +0000 (0:00:00.144) 0:00:21.637 ****** 2025-02-04 09:25:57.898624 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:57.899163 | orchestrator | 2025-02-04 09:25:57.900351 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-02-04 09:25:57.901214 | orchestrator | Tuesday 04 February 2025 09:25:57 +0000 (0:00:00.132) 0:00:21.769 
****** 2025-02-04 09:25:58.069185 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8b56b489-397c-55c4-ba6f-4e97fbbc410a', 'data_vg': 'ceph-8b56b489-397c-55c4-ba6f-4e97fbbc410a'})  2025-02-04 09:25:58.069811 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fd89a215-a86e-5b79-8dd1-0773a21fefe5', 'data_vg': 'ceph-fd89a215-a86e-5b79-8dd1-0773a21fefe5'})  2025-02-04 09:25:58.070161 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:58.073735 | orchestrator | 2025-02-04 09:25:58.073848 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-02-04 09:25:58.073874 | orchestrator | Tuesday 04 February 2025 09:25:58 +0000 (0:00:00.171) 0:00:21.941 ****** 2025-02-04 09:25:58.233336 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8b56b489-397c-55c4-ba6f-4e97fbbc410a', 'data_vg': 'ceph-8b56b489-397c-55c4-ba6f-4e97fbbc410a'})  2025-02-04 09:25:58.234284 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fd89a215-a86e-5b79-8dd1-0773a21fefe5', 'data_vg': 'ceph-fd89a215-a86e-5b79-8dd1-0773a21fefe5'})  2025-02-04 09:25:58.235592 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:58.237006 | orchestrator | 2025-02-04 09:25:58.239231 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-02-04 09:25:58.239803 | orchestrator | Tuesday 04 February 2025 09:25:58 +0000 (0:00:00.162) 0:00:22.104 ****** 2025-02-04 09:25:58.624447 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8b56b489-397c-55c4-ba6f-4e97fbbc410a', 'data_vg': 'ceph-8b56b489-397c-55c4-ba6f-4e97fbbc410a'})  2025-02-04 09:25:58.625046 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fd89a215-a86e-5b79-8dd1-0773a21fefe5', 'data_vg': 'ceph-fd89a215-a86e-5b79-8dd1-0773a21fefe5'})  2025-02-04 09:25:58.625850 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:58.626571 | orchestrator | 2025-02-04 09:25:58.627391 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-02-04 09:25:58.628727 | orchestrator | Tuesday 04 February 2025 09:25:58 +0000 (0:00:00.391) 0:00:22.495 ****** 2025-02-04 09:25:58.804447 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8b56b489-397c-55c4-ba6f-4e97fbbc410a', 'data_vg': 'ceph-8b56b489-397c-55c4-ba6f-4e97fbbc410a'})  2025-02-04 09:25:58.805893 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fd89a215-a86e-5b79-8dd1-0773a21fefe5', 'data_vg': 'ceph-fd89a215-a86e-5b79-8dd1-0773a21fefe5'})  2025-02-04 09:25:58.806093 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:58.806301 | orchestrator | 2025-02-04 09:25:58.808470 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-02-04 09:25:58.984448 | orchestrator | Tuesday 04 February 2025 09:25:58 +0000 (0:00:00.180) 0:00:22.676 ****** 2025-02-04 09:25:58.984573 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8b56b489-397c-55c4-ba6f-4e97fbbc410a', 'data_vg': 'ceph-8b56b489-397c-55c4-ba6f-4e97fbbc410a'})  2025-02-04 09:25:58.985216 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fd89a215-a86e-5b79-8dd1-0773a21fefe5', 'data_vg': 'ceph-fd89a215-a86e-5b79-8dd1-0773a21fefe5'})  2025-02-04 09:25:58.985730 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:58.985749 | orchestrator | 2025-02-04 09:25:58.985755 | 
2025-02-04 09:25:58.985755 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-02-04 09:25:58.985765 | orchestrator | Tuesday 04 February 2025 09:25:58 +0000 (0:00:00.180) 0:00:22.856 ****** 2025-02-04 09:25:59.185883 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8b56b489-397c-55c4-ba6f-4e97fbbc410a', 'data_vg': 'ceph-8b56b489-397c-55c4-ba6f-4e97fbbc410a'})  2025-02-04 09:25:59.187103 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fd89a215-a86e-5b79-8dd1-0773a21fefe5', 'data_vg': 'ceph-fd89a215-a86e-5b79-8dd1-0773a21fefe5'})  2025-02-04 09:25:59.188238 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:59.189421 | orchestrator | 2025-02-04 09:25:59.189721 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-02-04 09:25:59.190661 | orchestrator | Tuesday 04 February 2025 09:25:59 +0000 (0:00:00.200) 0:00:23.057 ****** 2025-02-04 09:25:59.369008 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8b56b489-397c-55c4-ba6f-4e97fbbc410a', 'data_vg': 'ceph-8b56b489-397c-55c4-ba6f-4e97fbbc410a'})  2025-02-04 09:25:59.369271 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fd89a215-a86e-5b79-8dd1-0773a21fefe5', 'data_vg': 'ceph-fd89a215-a86e-5b79-8dd1-0773a21fefe5'})  2025-02-04 09:25:59.370183 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:59.370555 | orchestrator | 2025-02-04 09:25:59.371284 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-02-04 09:25:59.372204 | orchestrator | Tuesday 04 February 2025 09:25:59 +0000 (0:00:00.183) 0:00:23.240 ****** 2025-02-04 09:25:59.538544 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8b56b489-397c-55c4-ba6f-4e97fbbc410a', 'data_vg': 'ceph-8b56b489-397c-55c4-ba6f-4e97fbbc410a'})  2025-02-04 09:25:59.539463 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fd89a215-a86e-5b79-8dd1-0773a21fefe5', 'data_vg': 'ceph-fd89a215-a86e-5b79-8dd1-0773a21fefe5'})  2025-02-04 09:25:59.541933 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:25:59.542093 | orchestrator | 2025-02-04 09:25:59.542120 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-02-04 09:25:59.542143 | orchestrator | Tuesday 04 February 2025 09:25:59 +0000 (0:00:00.170) 0:00:23.410 ****** 2025-02-04 09:26:00.052519 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:26:00.052642 | orchestrator | 2025-02-04 09:26:00.052766 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-02-04 09:26:00.055134 | orchestrator | Tuesday 04 February 2025 09:26:00 +0000 (0:00:00.512) 0:00:23.923 ****** 2025-02-04 09:26:00.568506 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:26:00.568791 | orchestrator | 2025-02-04 09:26:00.568843 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-02-04 09:26:00.568880 | orchestrator | Tuesday 04 February 2025 09:26:00 +0000 (0:00:00.515) 0:00:24.439 ****** 2025-02-04 09:26:00.721579 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:26:00.722111 | orchestrator | 2025-02-04 09:26:00.722368 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-02-04 09:26:00.722403 | orchestrator | Tuesday 04 February 2025 09:26:00 +0000 (0:00:00.155) 0:00:24.594 ****** 2025-02-04 09:26:00.940168 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-8b56b489-397c-55c4-ba6f-4e97fbbc410a', 'vg_name': 'ceph-8b56b489-397c-55c4-ba6f-4e97fbbc410a'}) 2025-02-04 09:26:00.940799 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-fd89a215-a86e-5b79-8dd1-0773a21fefe5', 'vg_name': 'ceph-fd89a215-a86e-5b79-8dd1-0773a21fefe5'}) 2025-02-04 09:26:00.943093 | orchestrator |
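[Note: the VG/LV name list above is built from the two query tasks and the "Combine JSON" step that precede it. On the command line those queries are roughly equivalent to the following; the exact option set is an assumption, as the log does not show the invocation:

    lvs --reportformat json -o lv_name,vg_name   # feeds _lvs_cmd_output
    pvs --reportformat json -o pv_name,vg_name   # feeds _pvs_cmd_output

Merging the two JSON documents is what later lets the lvm_report tie /dev/sdb and /dev/sdc to their ceph-<uuid> volume groups.]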
2025-02-04 09:26:01.329599 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-02-04 09:26:01.329781 | orchestrator | Tuesday 04 February 2025 09:26:00 +0000 (0:00:00.217) 0:00:24.811 ****** 2025-02-04 09:26:01.329820 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8b56b489-397c-55c4-ba6f-4e97fbbc410a', 'data_vg': 'ceph-8b56b489-397c-55c4-ba6f-4e97fbbc410a'})  2025-02-04 09:26:01.330128 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fd89a215-a86e-5b79-8dd1-0773a21fefe5', 'data_vg': 'ceph-fd89a215-a86e-5b79-8dd1-0773a21fefe5'})  2025-02-04 09:26:01.330157 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:26:01.330181 | orchestrator | 2025-02-04 09:26:01.330287 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-02-04 09:26:01.330991 | orchestrator | Tuesday 04 February 2025 09:26:01 +0000 (0:00:00.390) 0:00:25.202 ****** 2025-02-04 09:26:01.535793 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8b56b489-397c-55c4-ba6f-4e97fbbc410a', 'data_vg': 'ceph-8b56b489-397c-55c4-ba6f-4e97fbbc410a'})  2025-02-04 09:26:01.535983 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fd89a215-a86e-5b79-8dd1-0773a21fefe5', 'data_vg': 'ceph-fd89a215-a86e-5b79-8dd1-0773a21fefe5'})  2025-02-04 09:26:01.536552 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:26:01.537462 | orchestrator | 2025-02-04 09:26:01.538215 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-02-04 09:26:01.538590 | orchestrator | Tuesday 04 February 2025 09:26:01 +0000 (0:00:00.206) 0:00:25.408 ****** 2025-02-04 09:26:01.710752 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8b56b489-397c-55c4-ba6f-4e97fbbc410a', 'data_vg': 'ceph-8b56b489-397c-55c4-ba6f-4e97fbbc410a'})  2025-02-04 09:26:01.711090 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fd89a215-a86e-5b79-8dd1-0773a21fefe5', 'data_vg': 'ceph-fd89a215-a86e-5b79-8dd1-0773a21fefe5'})  2025-02-04 09:26:01.711859 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:26:01.714097 | orchestrator | 2025-02-04 09:26:02.432925 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-02-04 09:26:02.433028 | orchestrator | Tuesday 04 February 2025 09:26:01 +0000 (0:00:00.172) 0:00:25.580 ****** 2025-02-04 09:26:02.433060 | orchestrator | ok: [testbed-node-3] => { 2025-02-04 09:26:02.434883 | orchestrator |  "lvm_report": { 2025-02-04 09:26:02.435623 | orchestrator |  "lv": [ 2025-02-04 09:26:02.436590 | orchestrator |  { 2025-02-04 09:26:02.437828 | orchestrator |  "lv_name": "osd-block-8b56b489-397c-55c4-ba6f-4e97fbbc410a", 2025-02-04 09:26:02.438729 | orchestrator |  "vg_name": "ceph-8b56b489-397c-55c4-ba6f-4e97fbbc410a" 2025-02-04 09:26:02.439340 | orchestrator |  }, 2025-02-04 09:26:02.439694 | orchestrator |  { 2025-02-04 09:26:02.440168 | orchestrator |  "lv_name": "osd-block-fd89a215-a86e-5b79-8dd1-0773a21fefe5", 2025-02-04
09:26:02.440872 | orchestrator |  "vg_name": "ceph-fd89a215-a86e-5b79-8dd1-0773a21fefe5" 2025-02-04 09:26:02.440971 | orchestrator |  } 2025-02-04 09:26:02.441607 | orchestrator |  ], 2025-02-04 09:26:02.441998 | orchestrator |  "pv": [ 2025-02-04 09:26:02.442406 | orchestrator |  { 2025-02-04 09:26:02.442624 | orchestrator |  "pv_name": "/dev/sdb", 2025-02-04 09:26:02.443385 | orchestrator |  "vg_name": "ceph-8b56b489-397c-55c4-ba6f-4e97fbbc410a" 2025-02-04 09:26:02.443751 | orchestrator |  }, 2025-02-04 09:26:02.444219 | orchestrator |  { 2025-02-04 09:26:02.445047 | orchestrator |  "pv_name": "/dev/sdc", 2025-02-04 09:26:02.445882 | orchestrator |  "vg_name": "ceph-fd89a215-a86e-5b79-8dd1-0773a21fefe5" 2025-02-04 09:26:02.446143 | orchestrator |  } 2025-02-04 09:26:02.446196 | orchestrator |  ] 2025-02-04 09:26:02.446209 | orchestrator |  } 2025-02-04 09:26:02.446923 | orchestrator | } 2025-02-04 09:26:02.447312 | orchestrator | 2025-02-04 09:26:02.447360 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-02-04 09:26:02.447846 | orchestrator | 2025-02-04 09:26:02.448184 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-02-04 09:26:02.448721 | orchestrator | Tuesday 04 February 2025 09:26:02 +0000 (0:00:00.723) 0:00:26.304 ****** 2025-02-04 09:26:03.113334 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-02-04 09:26:03.114941 | orchestrator | 2025-02-04 09:26:03.115775 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-02-04 09:26:03.116794 | orchestrator | Tuesday 04 February 2025 09:26:03 +0000 (0:00:00.680) 0:00:26.984 ****** 2025-02-04 09:26:03.356842 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:26:03.357038 | orchestrator | 2025-02-04 09:26:03.357066 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:26:03.359423 | orchestrator | Tuesday 04 February 2025 09:26:03 +0000 (0:00:00.243) 0:00:27.227 ****** 2025-02-04 09:26:03.846377 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-02-04 09:26:03.850077 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-02-04 09:26:03.852139 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-02-04 09:26:03.852175 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-02-04 09:26:03.852199 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-02-04 09:26:03.853661 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-02-04 09:26:03.854565 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-02-04 09:26:03.855724 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-02-04 09:26:03.856002 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-02-04 09:26:03.856951 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-02-04 09:26:03.857279 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-02-04 09:26:03.857725 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-02-04 09:26:03.858091 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-02-04 09:26:03.858477 | orchestrator | 2025-02-04 09:26:03.858976 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:26:03.859528 | orchestrator | Tuesday 04 February 2025 09:26:03 +0000 (0:00:00.490) 0:00:27.717 ****** 2025-02-04 09:26:04.091138 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:04.095626 | orchestrator | 2025-02-04 09:26:04.095820 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:26:04.096408 | orchestrator | Tuesday 04 February 2025 09:26:04 +0000 (0:00:00.243) 0:00:27.961 ****** 2025-02-04 09:26:04.325371 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:04.325540 | orchestrator | 2025-02-04 09:26:04.546510 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:26:04.546663 | orchestrator | Tuesday 04 February 2025 09:26:04 +0000 (0:00:00.236) 0:00:28.198 ****** 2025-02-04 09:26:04.546754 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:04.546873 | orchestrator | 2025-02-04 09:26:04.546917 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:26:04.786912 | orchestrator | Tuesday 04 February 2025 09:26:04 +0000 (0:00:00.220) 0:00:28.418 ****** 2025-02-04 09:26:04.786986 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:04.787755 | orchestrator | 2025-02-04 09:26:04.788418 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:26:04.788809 | orchestrator | Tuesday 04 February 2025 09:26:04 +0000 (0:00:00.240) 0:00:28.659 ****** 2025-02-04 09:26:04.976188 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:04.977070 | orchestrator | 2025-02-04 09:26:04.977509 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:26:04.978206 | orchestrator | Tuesday 04 February 2025 09:26:04 +0000 (0:00:00.189) 0:00:28.849 ****** 2025-02-04 09:26:05.172374 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:05.172833 | orchestrator | 2025-02-04 09:26:05.172860 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:26:05.173085 | orchestrator | Tuesday 04 February 2025 09:26:05 +0000 (0:00:00.194) 0:00:29.043 ****** 2025-02-04 09:26:05.383783 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:05.384181 | orchestrator | 2025-02-04 09:26:05.384781 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:26:05.385215 | orchestrator | Tuesday 04 February 2025 09:26:05 +0000 (0:00:00.212) 0:00:29.255 ****** 2025-02-04 09:26:06.029067 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:06.029720 | orchestrator | 2025-02-04 09:26:06.030409 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:26:06.031375 | orchestrator | Tuesday 04 February 2025 09:26:06 +0000 (0:00:00.644) 0:00:29.900 ****** 2025-02-04 09:26:06.494234 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_65dbfb35-c088-49c9-9717-b00e675ef863) 2025-02-04 09:26:06.494447 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_65dbfb35-c088-49c9-9717-b00e675ef863) 2025-02-04 09:26:06.495093 | orchestrator | 2025-02-04 09:26:06.495835 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:26:06.496736 | orchestrator | Tuesday 04 February 2025 09:26:06 +0000 (0:00:00.466) 0:00:30.367 ****** 2025-02-04 09:26:06.942548 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d5e896df-3760-43bc-823d-dd864c8452e8) 2025-02-04 09:26:06.943132 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d5e896df-3760-43bc-823d-dd864c8452e8) 2025-02-04 09:26:06.943954 | orchestrator | 2025-02-04 09:26:06.944115 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:26:06.944219 | orchestrator | Tuesday 04 February 2025 09:26:06 +0000 (0:00:00.448) 0:00:30.815 ****** 2025-02-04 09:26:07.392847 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_81f63dc5-7b43-4c99-9b7b-2b520b540dae) 2025-02-04 09:26:07.396694 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_81f63dc5-7b43-4c99-9b7b-2b520b540dae) 2025-02-04 09:26:07.396772 | orchestrator | 2025-02-04 09:26:07.842898 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:26:07.842976 | orchestrator | Tuesday 04 February 2025 09:26:07 +0000 (0:00:00.448) 0:00:31.264 ****** 2025-02-04 09:26:07.843007 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c8a0131d-fae0-46a9-a275-20bf3d241b40) 2025-02-04 09:26:07.844195 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c8a0131d-fae0-46a9-a275-20bf3d241b40) 2025-02-04 09:26:07.846846 | orchestrator | 2025-02-04 09:26:08.181293 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:26:08.181397 | orchestrator | Tuesday 04 February 2025 09:26:07 +0000 (0:00:00.450) 0:00:31.715 ****** 2025-02-04 09:26:08.181424 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-02-04 09:26:08.181514 | orchestrator | 2025-02-04 09:26:08.182438 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:26:08.183067 | orchestrator | Tuesday 04 February 2025 09:26:08 +0000 (0:00:00.337) 0:00:32.052 ****** 2025-02-04 09:26:08.658387 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-02-04 09:26:08.660057 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-02-04 09:26:08.660101 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-02-04 09:26:08.661149 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-02-04 09:26:08.662399 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-02-04 09:26:08.664375 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-02-04 09:26:08.664579 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-02-04 09:26:08.665133 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-02-04 09:26:08.665615 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-02-04 09:26:08.666408 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-02-04 09:26:08.666721 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-02-04 09:26:08.666938 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-02-04 09:26:08.667377 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-02-04 09:26:08.667596 | orchestrator | 2025-02-04 09:26:08.668362 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:26:08.669559 | orchestrator | Tuesday 04 February 2025 09:26:08 +0000 (0:00:00.478) 0:00:32.531 ****** 2025-02-04 09:26:08.883335 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:08.883803 | orchestrator | 2025-02-04 09:26:08.884994 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:26:08.885181 | orchestrator | Tuesday 04 February 2025 09:26:08 +0000 (0:00:00.225) 0:00:32.756 ****** 2025-02-04 09:26:09.083297 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:09.083803 | orchestrator | 2025-02-04 09:26:09.084571 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:26:09.086887 | orchestrator | Tuesday 04 February 2025 09:26:09 +0000 (0:00:00.198) 0:00:32.955 ****** 2025-02-04 09:26:09.738247 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:09.738782 | orchestrator | 2025-02-04 09:26:09.738829 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:26:09.739446 | orchestrator | Tuesday 04 February 2025 09:26:09 +0000 (0:00:00.655) 0:00:33.610 ****** 2025-02-04 09:26:09.986262 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:09.986901 | orchestrator | 2025-02-04 09:26:09.986998 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:26:09.987546 | orchestrator | Tuesday 04 February 2025 09:26:09 +0000 (0:00:00.248) 0:00:33.858 ****** 2025-02-04 09:26:10.185596 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:10.185937 | orchestrator | 2025-02-04 09:26:10.185975 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:26:10.186879 | orchestrator | Tuesday 04 February 2025 09:26:10 +0000 (0:00:00.197) 0:00:34.056 ****** 2025-02-04 09:26:10.391752 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:10.393318 | orchestrator | 2025-02-04 09:26:10.394148 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:26:10.395409 | orchestrator | Tuesday 04 February 2025 09:26:10 +0000 (0:00:00.207) 0:00:34.263 ****** 2025-02-04 09:26:10.629882 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:10.630121 | orchestrator | 2025-02-04 09:26:10.630741 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:26:10.631313 | orchestrator | Tuesday 04 February 2025 09:26:10 +0000 (0:00:00.239) 0:00:34.503 ****** 2025-02-04 09:26:10.827653 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:10.828849 | orchestrator | 2025-02-04 09:26:10.829586 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-02-04 09:26:10.830629 | orchestrator | Tuesday 04 February 2025 09:26:10 +0000 (0:00:00.197) 0:00:34.700 ****** 2025-02-04 09:26:11.504450 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-02-04 09:26:11.507054 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-02-04 09:26:11.754728 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-02-04 09:26:11.754860 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-02-04 09:26:11.754875 | orchestrator | 2025-02-04 09:26:11.754887 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:26:11.754899 | orchestrator | Tuesday 04 February 2025 09:26:11 +0000 (0:00:00.674) 0:00:35.374 ****** 2025-02-04 09:26:11.754923 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:11.754983 | orchestrator | 2025-02-04 09:26:11.755147 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:26:11.755747 | orchestrator | Tuesday 04 February 2025 09:26:11 +0000 (0:00:00.251) 0:00:35.625 ****** 2025-02-04 09:26:11.998224 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:11.998387 | orchestrator | 2025-02-04 09:26:11.998892 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:26:11.999102 | orchestrator | Tuesday 04 February 2025 09:26:11 +0000 (0:00:00.245) 0:00:35.871 ****** 2025-02-04 09:26:12.201429 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:12.871579 | orchestrator | 2025-02-04 09:26:12.871755 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:26:12.871777 | orchestrator | Tuesday 04 February 2025 09:26:12 +0000 (0:00:00.198) 0:00:36.069 ****** 2025-02-04 09:26:12.871809 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:12.871878 | orchestrator | 2025-02-04 09:26:12.872107 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-02-04 09:26:12.872958 | orchestrator | Tuesday 04 February 2025 09:26:12 +0000 (0:00:00.674) 0:00:36.743 ****** 2025-02-04 09:26:13.075074 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:13.076369 | orchestrator | 2025-02-04 09:26:13.077181 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-02-04 09:26:13.077978 | orchestrator | Tuesday 04 February 2025 09:26:13 +0000 (0:00:00.202) 0:00:36.946 ****** 2025-02-04 09:26:13.309529 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a9a0f878-ef24-53af-8bd4-10a12036221e'}}) 2025-02-04 09:26:13.309740 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '857e455f-002b-509a-b66d-9c4a1025daeb'}}) 2025-02-04 09:26:13.309772 | orchestrator | 2025-02-04 09:26:13.310346 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-02-04 09:26:13.311953 | orchestrator | Tuesday 04 February 2025 09:26:13 +0000 (0:00:00.234) 0:00:37.180 ****** 2025-02-04 09:26:15.181109 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a9a0f878-ef24-53af-8bd4-10a12036221e', 'data_vg': 'ceph-a9a0f878-ef24-53af-8bd4-10a12036221e'}) 2025-02-04 09:26:15.182276 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-857e455f-002b-509a-b66d-9c4a1025daeb', 'data_vg': 'ceph-857e455f-002b-509a-b66d-9c4a1025daeb'}) 2025-02-04 09:26:15.186251 | orchestrator |
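[Note: the "Create block VGs" task above and "Create block LVs" just below are the only steps reported as changed for testbed-node-4 in this play: one volume group per device from ceph_osd_devices, named ceph-<osd_lvm_uuid>, plus one osd-block-<osd_lvm_uuid> logical volume inside it. For /dev/sdb the pair corresponds roughly to the following sketch; the real task parameters are not shown in the log:

    vgcreate ceph-a9a0f878-ef24-53af-8bd4-10a12036221e /dev/sdb
    lvcreate -l 100%FREE -n osd-block-a9a0f878-ef24-53af-8bd4-10a12036221e ceph-a9a0f878-ef24-53af-8bd4-10a12036221e

ceph-volume can later consume such pre-created VG/LV pairs through the lvm_volumes list.]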
2025-02-04 09:26:15.391099 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-02-04 09:26:15.391220 | orchestrator | Tuesday 04 February 2025 09:26:15 +0000 (0:00:01.871) 0:00:39.052 ****** 2025-02-04 09:26:15.391257 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a9a0f878-ef24-53af-8bd4-10a12036221e', 'data_vg': 'ceph-a9a0f878-ef24-53af-8bd4-10a12036221e'})  2025-02-04 09:26:16.654831 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-857e455f-002b-509a-b66d-9c4a1025daeb', 'data_vg': 'ceph-857e455f-002b-509a-b66d-9c4a1025daeb'})  2025-02-04 09:26:16.654970 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:16.654999 | orchestrator | 2025-02-04 09:26:16.655023 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-02-04 09:26:16.655098 | orchestrator | Tuesday 04 February 2025 09:26:15 +0000 (0:00:00.209) 0:00:39.262 ****** 2025-02-04 09:26:16.655144 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a9a0f878-ef24-53af-8bd4-10a12036221e', 'data_vg': 'ceph-a9a0f878-ef24-53af-8bd4-10a12036221e'}) 2025-02-04 09:26:16.655267 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-857e455f-002b-509a-b66d-9c4a1025daeb', 'data_vg': 'ceph-857e455f-002b-509a-b66d-9c4a1025daeb'}) 2025-02-04 09:26:16.656215 | orchestrator | 2025-02-04 09:26:16.656293 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-02-04 09:26:16.834777 | orchestrator | Tuesday 04 February 2025 09:26:16 +0000 (0:00:01.263) 0:00:40.526 ****** 2025-02-04 09:26:16.834957 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a9a0f878-ef24-53af-8bd4-10a12036221e', 'data_vg': 'ceph-a9a0f878-ef24-53af-8bd4-10a12036221e'})  2025-02-04 09:26:16.835875 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-857e455f-002b-509a-b66d-9c4a1025daeb', 'data_vg': 'ceph-857e455f-002b-509a-b66d-9c4a1025daeb'})  2025-02-04 09:26:16.837101 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:16.838489 | orchestrator | 2025-02-04 09:26:16.839207 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-02-04 09:26:16.839965 | orchestrator | Tuesday 04 February 2025 09:26:16 +0000 (0:00:00.180) 0:00:40.706 ****** 2025-02-04 09:26:16.988047 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:17.151395 | orchestrator | 2025-02-04 09:26:17.151504 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-02-04 09:26:17.151521 | orchestrator | Tuesday 04 February 2025 09:26:16 +0000 (0:00:00.149) 0:00:40.855 ****** 2025-02-04 09:26:17.151549 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a9a0f878-ef24-53af-8bd4-10a12036221e', 'data_vg': 'ceph-a9a0f878-ef24-53af-8bd4-10a12036221e'})  2025-02-04 09:26:17.151636 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-857e455f-002b-509a-b66d-9c4a1025daeb', 'data_vg': 'ceph-857e455f-002b-509a-b66d-9c4a1025daeb'})  2025-02-04 09:26:17.151659 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:17.151758 | orchestrator | 2025-02-04 09:26:17.152057 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-02-04 09:26:17.152311 | orchestrator | Tuesday
04 February 2025 09:26:17 +0000 (0:00:00.169) 0:00:41.024 ****** 2025-02-04 09:26:17.504816 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:17.505025 | orchestrator | 2025-02-04 09:26:17.505714 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-02-04 09:26:17.505999 | orchestrator | Tuesday 04 February 2025 09:26:17 +0000 (0:00:00.350) 0:00:41.375 ****** 2025-02-04 09:26:17.698167 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a9a0f878-ef24-53af-8bd4-10a12036221e', 'data_vg': 'ceph-a9a0f878-ef24-53af-8bd4-10a12036221e'})  2025-02-04 09:26:17.698349 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-857e455f-002b-509a-b66d-9c4a1025daeb', 'data_vg': 'ceph-857e455f-002b-509a-b66d-9c4a1025daeb'})  2025-02-04 09:26:17.698993 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:17.699023 | orchestrator | 2025-02-04 09:26:17.699463 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-02-04 09:26:17.699745 | orchestrator | Tuesday 04 February 2025 09:26:17 +0000 (0:00:00.194) 0:00:41.570 ****** 2025-02-04 09:26:17.858413 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:17.858590 | orchestrator | 2025-02-04 09:26:18.049402 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-02-04 09:26:18.049522 | orchestrator | Tuesday 04 February 2025 09:26:17 +0000 (0:00:00.160) 0:00:41.730 ****** 2025-02-04 09:26:18.049559 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a9a0f878-ef24-53af-8bd4-10a12036221e', 'data_vg': 'ceph-a9a0f878-ef24-53af-8bd4-10a12036221e'})  2025-02-04 09:26:18.050136 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-857e455f-002b-509a-b66d-9c4a1025daeb', 'data_vg': 'ceph-857e455f-002b-509a-b66d-9c4a1025daeb'})  2025-02-04 09:26:18.051421 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:18.052282 | orchestrator | 2025-02-04 09:26:18.052980 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-02-04 09:26:18.053835 | orchestrator | Tuesday 04 February 2025 09:26:18 +0000 (0:00:00.190) 0:00:41.921 ****** 2025-02-04 09:26:18.200835 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:26:18.401141 | orchestrator | 2025-02-04 09:26:18.401231 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-02-04 09:26:18.401249 | orchestrator | Tuesday 04 February 2025 09:26:18 +0000 (0:00:00.150) 0:00:42.072 ****** 2025-02-04 09:26:18.401277 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a9a0f878-ef24-53af-8bd4-10a12036221e', 'data_vg': 'ceph-a9a0f878-ef24-53af-8bd4-10a12036221e'})  2025-02-04 09:26:18.401403 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-857e455f-002b-509a-b66d-9c4a1025daeb', 'data_vg': 'ceph-857e455f-002b-509a-b66d-9c4a1025daeb'})  2025-02-04 09:26:18.402214 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:18.402313 | orchestrator | 2025-02-04 09:26:18.402906 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-02-04 09:26:18.406884 | orchestrator | Tuesday 04 February 2025 09:26:18 +0000 (0:00:00.200) 0:00:42.272 ****** 2025-02-04 09:26:18.603076 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a9a0f878-ef24-53af-8bd4-10a12036221e', 'data_vg': 
'ceph-a9a0f878-ef24-53af-8bd4-10a12036221e'})  2025-02-04 09:26:18.612277 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-857e455f-002b-509a-b66d-9c4a1025daeb', 'data_vg': 'ceph-857e455f-002b-509a-b66d-9c4a1025daeb'})  2025-02-04 09:26:18.614158 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:18.616427 | orchestrator | 2025-02-04 09:26:18.618636 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-02-04 09:26:18.618678 | orchestrator | Tuesday 04 February 2025 09:26:18 +0000 (0:00:00.199) 0:00:42.472 ****** 2025-02-04 09:26:18.856477 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a9a0f878-ef24-53af-8bd4-10a12036221e', 'data_vg': 'ceph-a9a0f878-ef24-53af-8bd4-10a12036221e'})  2025-02-04 09:26:18.858188 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-857e455f-002b-509a-b66d-9c4a1025daeb', 'data_vg': 'ceph-857e455f-002b-509a-b66d-9c4a1025daeb'})  2025-02-04 09:26:18.859830 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:18.861059 | orchestrator | 2025-02-04 09:26:18.861854 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-02-04 09:26:18.862854 | orchestrator | Tuesday 04 February 2025 09:26:18 +0000 (0:00:00.255) 0:00:42.727 ****** 2025-02-04 09:26:19.009312 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:19.009954 | orchestrator | 2025-02-04 09:26:19.011018 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-02-04 09:26:19.013403 | orchestrator | Tuesday 04 February 2025 09:26:18 +0000 (0:00:00.146) 0:00:42.874 ****** 2025-02-04 09:26:19.152766 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:19.153665 | orchestrator | 2025-02-04 09:26:19.153720 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-02-04 09:26:19.155224 | orchestrator | Tuesday 04 February 2025 09:26:19 +0000 (0:00:00.151) 0:00:43.026 ****** 2025-02-04 09:26:19.304078 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:19.309894 | orchestrator | 2025-02-04 09:26:19.312782 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-02-04 09:26:19.313906 | orchestrator | Tuesday 04 February 2025 09:26:19 +0000 (0:00:00.148) 0:00:43.174 ****** 2025-02-04 09:26:19.456053 | orchestrator | ok: [testbed-node-4] => { 2025-02-04 09:26:19.456215 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-02-04 09:26:19.457112 | orchestrator | } 2025-02-04 09:26:19.457404 | orchestrator | 2025-02-04 09:26:19.458204 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-02-04 09:26:19.458932 | orchestrator | Tuesday 04 February 2025 09:26:19 +0000 (0:00:00.154) 0:00:43.329 ****** 2025-02-04 09:26:19.818522 | orchestrator | ok: [testbed-node-4] => { 2025-02-04 09:26:19.818965 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-02-04 09:26:19.819624 | orchestrator | } 2025-02-04 09:26:19.821056 | orchestrator | 2025-02-04 09:26:19.821894 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-02-04 09:26:19.823064 | orchestrator | Tuesday 04 February 2025 09:26:19 +0000 (0:00:00.362) 0:00:43.692 ****** 2025-02-04 09:26:19.958360 | orchestrator | ok: [testbed-node-4] => { 2025-02-04 09:26:19.959043 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-02-04 
09:26:19.960044 | orchestrator | } 2025-02-04 09:26:19.960983 | orchestrator | 2025-02-04 09:26:19.961771 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-02-04 09:26:19.962424 | orchestrator | Tuesday 04 February 2025 09:26:19 +0000 (0:00:00.139) 0:00:43.831 ****** 2025-02-04 09:26:20.491760 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:26:20.491999 | orchestrator | 2025-02-04 09:26:20.492037 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-02-04 09:26:20.492125 | orchestrator | Tuesday 04 February 2025 09:26:20 +0000 (0:00:00.531) 0:00:44.363 ****** 2025-02-04 09:26:21.009604 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:26:21.009801 | orchestrator | 2025-02-04 09:26:21.010888 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-02-04 09:26:21.011768 | orchestrator | Tuesday 04 February 2025 09:26:21 +0000 (0:00:00.518) 0:00:44.881 ****** 2025-02-04 09:26:21.575416 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:26:21.579244 | orchestrator | 2025-02-04 09:26:21.580414 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-02-04 09:26:21.580980 | orchestrator | Tuesday 04 February 2025 09:26:21 +0000 (0:00:00.563) 0:00:45.445 ****** 2025-02-04 09:26:21.726229 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:26:21.727617 | orchestrator | 2025-02-04 09:26:21.727710 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-02-04 09:26:21.732630 | orchestrator | Tuesday 04 February 2025 09:26:21 +0000 (0:00:00.153) 0:00:45.598 ****** 2025-02-04 09:26:21.852919 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:21.858559 | orchestrator | 2025-02-04 09:26:21.859078 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-02-04 09:26:21.859504 | orchestrator | Tuesday 04 February 2025 09:26:21 +0000 (0:00:00.127) 0:00:45.725 ****** 2025-02-04 09:26:21.977157 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:21.978160 | orchestrator | 2025-02-04 09:26:21.978198 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-02-04 09:26:21.980416 | orchestrator | Tuesday 04 February 2025 09:26:21 +0000 (0:00:00.123) 0:00:45.849 ****** 2025-02-04 09:26:22.135065 | orchestrator | ok: [testbed-node-4] => { 2025-02-04 09:26:22.136061 | orchestrator |  "vgs_report": { 2025-02-04 09:26:22.137137 | orchestrator |  "vg": [] 2025-02-04 09:26:22.274821 | orchestrator |  } 2025-02-04 09:26:22.274934 | orchestrator | } 2025-02-04 09:26:22.274974 | orchestrator | 2025-02-04 09:26:22.274991 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-02-04 09:26:22.275007 | orchestrator | Tuesday 04 February 2025 09:26:22 +0000 (0:00:00.157) 0:00:46.007 ****** 2025-02-04 09:26:22.275037 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:22.275146 | orchestrator | 2025-02-04 09:26:22.276173 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-02-04 09:26:22.276336 | orchestrator | Tuesday 04 February 2025 09:26:22 +0000 (0:00:00.141) 0:00:46.148 ****** 2025-02-04 09:26:22.433794 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:22.434577 | orchestrator |
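[Note: the three "Gather ... VGs with total and available size in bytes" tasks above query volume group capacity, roughly equivalent to the following; the option set is an assumption, as the log does not show the invocation:

    vgs --reportformat json -o vg_name,vg_size,vg_free --units b

Because this host has no dedicated DB, WAL, or DB+WAL devices, the combined vgs_report contains an empty "vg" list and the size calculations and overcommit checks around it all report skipping.]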
2025-02-04 09:26:22.435285 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-02-04 09:26:22.435973 | orchestrator | Tuesday 04 February 2025 09:26:22 +0000 (0:00:00.157) 0:00:46.306 ****** 2025-02-04 09:26:22.792783 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:22.795122 | orchestrator | 2025-02-04 09:26:22.796772 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-02-04 09:26:22.798757 | orchestrator | Tuesday 04 February 2025 09:26:22 +0000 (0:00:00.359) 0:00:46.665 ****** 2025-02-04 09:26:22.957100 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:22.957736 | orchestrator | 2025-02-04 09:26:22.957837 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-02-04 09:26:22.957868 | orchestrator | Tuesday 04 February 2025 09:26:22 +0000 (0:00:00.158) 0:00:46.823 ****** 2025-02-04 09:26:23.127327 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:23.127513 | orchestrator | 2025-02-04 09:26:23.127540 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-02-04 09:26:23.128342 | orchestrator | Tuesday 04 February 2025 09:26:23 +0000 (0:00:00.174) 0:00:46.998 ****** 2025-02-04 09:26:23.271528 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:23.272104 | orchestrator | 2025-02-04 09:26:23.275860 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-02-04 09:26:23.423049 | orchestrator | Tuesday 04 February 2025 09:26:23 +0000 (0:00:00.143) 0:00:47.142 ****** 2025-02-04 09:26:23.423209 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:23.427036 | orchestrator | 2025-02-04 09:26:23.561800 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-02-04 09:26:23.561913 | orchestrator | Tuesday 04 February 2025 09:26:23 +0000 (0:00:00.151) 0:00:47.294 ****** 2025-02-04 09:26:23.561945 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:23.562491 | orchestrator | 2025-02-04 09:26:23.565803 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-02-04 09:26:23.697091 | orchestrator | Tuesday 04 February 2025 09:26:23 +0000 (0:00:00.139) 0:00:47.433 ****** 2025-02-04 09:26:23.697235 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:23.697925 | orchestrator | 2025-02-04 09:26:23.699273 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-02-04 09:26:23.699746 | orchestrator | Tuesday 04 February 2025 09:26:23 +0000 (0:00:00.136) 0:00:47.570 ****** 2025-02-04 09:26:23.848354 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:23.849345 | orchestrator | 2025-02-04 09:26:23.852723 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-02-04 09:26:24.002321 | orchestrator | Tuesday 04 February 2025 09:26:23 +0000 (0:00:00.150) 0:00:47.720 ****** 2025-02-04 09:26:24.002449 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:24.003716 | orchestrator | 2025-02-04 09:26:24.005939 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-02-04 09:26:24.006440 | orchestrator | Tuesday 04 February 2025 09:26:23 +0000 (0:00:00.153) 0:00:47.874 ****** 2025-02-04 09:26:24.146144 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:24.147181 | orchestrator | 2025-02-04 09:26:24.148027 | orchestrator | TASK [Fail if DB LV
size < 30 GiB for ceph_db_devices] ************************* 2025-02-04 09:26:24.148780 | orchestrator | Tuesday 04 February 2025 09:26:24 +0000 (0:00:00.143) 0:00:48.017 ****** 2025-02-04 09:26:24.289130 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:24.290236 | orchestrator | 2025-02-04 09:26:24.291027 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-02-04 09:26:24.293768 | orchestrator | Tuesday 04 February 2025 09:26:24 +0000 (0:00:00.144) 0:00:48.162 ****** 2025-02-04 09:26:24.440059 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:24.440779 | orchestrator | 2025-02-04 09:26:24.443522 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-02-04 09:26:24.828838 | orchestrator | Tuesday 04 February 2025 09:26:24 +0000 (0:00:00.149) 0:00:48.311 ****** 2025-02-04 09:26:24.828981 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a9a0f878-ef24-53af-8bd4-10a12036221e', 'data_vg': 'ceph-a9a0f878-ef24-53af-8bd4-10a12036221e'})  2025-02-04 09:26:24.829132 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-857e455f-002b-509a-b66d-9c4a1025daeb', 'data_vg': 'ceph-857e455f-002b-509a-b66d-9c4a1025daeb'})  2025-02-04 09:26:24.829231 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:24.829600 | orchestrator | 2025-02-04 09:26:24.829792 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-02-04 09:26:24.829823 | orchestrator | Tuesday 04 February 2025 09:26:24 +0000 (0:00:00.390) 0:00:48.702 ****** 2025-02-04 09:26:25.008969 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a9a0f878-ef24-53af-8bd4-10a12036221e', 'data_vg': 'ceph-a9a0f878-ef24-53af-8bd4-10a12036221e'})  2025-02-04 09:26:25.009381 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-857e455f-002b-509a-b66d-9c4a1025daeb', 'data_vg': 'ceph-857e455f-002b-509a-b66d-9c4a1025daeb'})  2025-02-04 09:26:25.009415 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:25.009459 | orchestrator | 2025-02-04 09:26:25.010538 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-02-04 09:26:25.011593 | orchestrator | Tuesday 04 February 2025 09:26:25 +0000 (0:00:00.180) 0:00:48.882 ****** 2025-02-04 09:26:25.164648 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a9a0f878-ef24-53af-8bd4-10a12036221e', 'data_vg': 'ceph-a9a0f878-ef24-53af-8bd4-10a12036221e'})  2025-02-04 09:26:25.165965 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-857e455f-002b-509a-b66d-9c4a1025daeb', 'data_vg': 'ceph-857e455f-002b-509a-b66d-9c4a1025daeb'})  2025-02-04 09:26:25.166663 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:25.168604 | orchestrator | 2025-02-04 09:26:25.345317 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-02-04 09:26:25.345369 | orchestrator | Tuesday 04 February 2025 09:26:25 +0000 (0:00:00.155) 0:00:49.037 ****** 2025-02-04 09:26:25.345495 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a9a0f878-ef24-53af-8bd4-10a12036221e', 'data_vg': 'ceph-a9a0f878-ef24-53af-8bd4-10a12036221e'})  2025-02-04 09:26:25.346159 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-857e455f-002b-509a-b66d-9c4a1025daeb', 'data_vg': 'ceph-857e455f-002b-509a-b66d-9c4a1025daeb'})  2025-02-04 
09:26:25.346923 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:25.347610 | orchestrator | 2025-02-04 09:26:25.350743 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-02-04 09:26:25.511906 | orchestrator | Tuesday 04 February 2025 09:26:25 +0000 (0:00:00.178) 0:00:49.215 ****** 2025-02-04 09:26:25.512047 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a9a0f878-ef24-53af-8bd4-10a12036221e', 'data_vg': 'ceph-a9a0f878-ef24-53af-8bd4-10a12036221e'})  2025-02-04 09:26:25.512792 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-857e455f-002b-509a-b66d-9c4a1025daeb', 'data_vg': 'ceph-857e455f-002b-509a-b66d-9c4a1025daeb'})  2025-02-04 09:26:25.513249 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:25.514568 | orchestrator | 2025-02-04 09:26:25.514794 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-02-04 09:26:25.515507 | orchestrator | Tuesday 04 February 2025 09:26:25 +0000 (0:00:00.167) 0:00:49.383 ****** 2025-02-04 09:26:25.679132 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a9a0f878-ef24-53af-8bd4-10a12036221e', 'data_vg': 'ceph-a9a0f878-ef24-53af-8bd4-10a12036221e'})  2025-02-04 09:26:25.680238 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-857e455f-002b-509a-b66d-9c4a1025daeb', 'data_vg': 'ceph-857e455f-002b-509a-b66d-9c4a1025daeb'})  2025-02-04 09:26:25.681138 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:25.682762 | orchestrator | 2025-02-04 09:26:25.684035 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-02-04 09:26:25.684220 | orchestrator | Tuesday 04 February 2025 09:26:25 +0000 (0:00:00.167) 0:00:49.551 ****** 2025-02-04 09:26:25.856490 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a9a0f878-ef24-53af-8bd4-10a12036221e', 'data_vg': 'ceph-a9a0f878-ef24-53af-8bd4-10a12036221e'})  2025-02-04 09:26:25.856844 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-857e455f-002b-509a-b66d-9c4a1025daeb', 'data_vg': 'ceph-857e455f-002b-509a-b66d-9c4a1025daeb'})  2025-02-04 09:26:25.857569 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:25.858496 | orchestrator | 2025-02-04 09:26:25.859239 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-02-04 09:26:25.859483 | orchestrator | Tuesday 04 February 2025 09:26:25 +0000 (0:00:00.176) 0:00:49.728 ****** 2025-02-04 09:26:26.060901 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a9a0f878-ef24-53af-8bd4-10a12036221e', 'data_vg': 'ceph-a9a0f878-ef24-53af-8bd4-10a12036221e'})  2025-02-04 09:26:26.061608 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-857e455f-002b-509a-b66d-9c4a1025daeb', 'data_vg': 'ceph-857e455f-002b-509a-b66d-9c4a1025daeb'})  2025-02-04 09:26:26.062440 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:26.063054 | orchestrator | 2025-02-04 09:26:26.063956 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-02-04 09:26:26.064268 | orchestrator | Tuesday 04 February 2025 09:26:26 +0000 (0:00:00.205) 0:00:49.933 ****** 2025-02-04 09:26:26.616781 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:26:26.617245 | orchestrator | 2025-02-04 09:26:26.617869 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] 
******************************** 2025-02-04 09:26:26.618386 | orchestrator | Tuesday 04 February 2025 09:26:26 +0000 (0:00:00.555) 0:00:50.488 ****** 2025-02-04 09:26:27.133347 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:26:27.134073 | orchestrator | 2025-02-04 09:26:27.135092 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-02-04 09:26:27.135498 | orchestrator | Tuesday 04 February 2025 09:26:27 +0000 (0:00:00.516) 0:00:51.005 ****** 2025-02-04 09:26:27.504505 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:26:27.504738 | orchestrator | 2025-02-04 09:26:27.506547 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-02-04 09:26:27.694480 | orchestrator | Tuesday 04 February 2025 09:26:27 +0000 (0:00:00.371) 0:00:51.376 ****** 2025-02-04 09:26:27.694622 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-857e455f-002b-509a-b66d-9c4a1025daeb', 'vg_name': 'ceph-857e455f-002b-509a-b66d-9c4a1025daeb'}) 2025-02-04 09:26:27.694735 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-a9a0f878-ef24-53af-8bd4-10a12036221e', 'vg_name': 'ceph-a9a0f878-ef24-53af-8bd4-10a12036221e'}) 2025-02-04 09:26:27.697259 | orchestrator | 2025-02-04 09:26:27.698096 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-02-04 09:26:27.698640 | orchestrator | Tuesday 04 February 2025 09:26:27 +0000 (0:00:00.188) 0:00:51.565 ****** 2025-02-04 09:26:27.865074 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a9a0f878-ef24-53af-8bd4-10a12036221e', 'data_vg': 'ceph-a9a0f878-ef24-53af-8bd4-10a12036221e'})  2025-02-04 09:26:27.865519 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-857e455f-002b-509a-b66d-9c4a1025daeb', 'data_vg': 'ceph-857e455f-002b-509a-b66d-9c4a1025daeb'})  2025-02-04 09:26:27.865862 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:27.866121 | orchestrator | 2025-02-04 09:26:27.866676 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-02-04 09:26:27.867055 | orchestrator | Tuesday 04 February 2025 09:26:27 +0000 (0:00:00.173) 0:00:51.738 ****** 2025-02-04 09:26:28.042365 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a9a0f878-ef24-53af-8bd4-10a12036221e', 'data_vg': 'ceph-a9a0f878-ef24-53af-8bd4-10a12036221e'})  2025-02-04 09:26:28.042777 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-857e455f-002b-509a-b66d-9c4a1025daeb', 'data_vg': 'ceph-857e455f-002b-509a-b66d-9c4a1025daeb'})  2025-02-04 09:26:28.047812 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:28.051859 | orchestrator | 2025-02-04 09:26:28.052679 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-02-04 09:26:28.053132 | orchestrator | Tuesday 04 February 2025 09:26:28 +0000 (0:00:00.175) 0:00:51.913 ****** 2025-02-04 09:26:28.232990 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a9a0f878-ef24-53af-8bd4-10a12036221e', 'data_vg': 'ceph-a9a0f878-ef24-53af-8bd4-10a12036221e'})  2025-02-04 09:26:28.233744 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-857e455f-002b-509a-b66d-9c4a1025daeb', 'data_vg': 'ceph-857e455f-002b-509a-b66d-9c4a1025daeb'})  2025-02-04 09:26:28.234861 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:26:28.237309 | orchestrator |
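[Note: the three "Fail if ... LV defined in lvm_volumes is missing" guards above cross-check the lvm_volumes entries against the VG/LV names just collected; here the failure condition holds for no item, so every check reports skipping. A manual spot check for one block LV would look roughly like:

    # exits non-zero if the expected block LV is absent
    lvs --noheadings ceph-a9a0f878-ef24-53af-8bd4-10a12036221e/osd-block-a9a0f878-ef24-53af-8bd4-10a12036221e]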
2025-02-04 09:26:29.186879 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-02-04 09:26:29.187004 | orchestrator | Tuesday 04 February 2025 09:26:28 +0000 (0:00:00.191) 0:00:52.105 ****** 2025-02-04 09:26:29.187048 | orchestrator | ok: [testbed-node-4] => { 2025-02-04 09:26:29.187141 | orchestrator |  "lvm_report": { 2025-02-04 09:26:29.187171 | orchestrator |  "lv": [ 2025-02-04 09:26:29.187195 | orchestrator |  { 2025-02-04 09:26:29.187226 | orchestrator |  "lv_name": "osd-block-857e455f-002b-509a-b66d-9c4a1025daeb", 2025-02-04 09:26:29.188083 | orchestrator |  "vg_name": "ceph-857e455f-002b-509a-b66d-9c4a1025daeb" 2025-02-04 09:26:29.188239 | orchestrator |  }, 2025-02-04 09:26:29.189407 | orchestrator |  { 2025-02-04 09:26:29.189588 | orchestrator |  "lv_name": "osd-block-a9a0f878-ef24-53af-8bd4-10a12036221e", 2025-02-04 09:26:29.189791 | orchestrator |  "vg_name": "ceph-a9a0f878-ef24-53af-8bd4-10a12036221e" 2025-02-04 09:26:29.190708 | orchestrator |  } 2025-02-04 09:26:29.190915 | orchestrator |  ], 2025-02-04 09:26:29.191933 | orchestrator |  "pv": [ 2025-02-04 09:26:29.192942 | orchestrator |  { 2025-02-04 09:26:29.192973 | orchestrator |  "pv_name": "/dev/sdb", 2025-02-04 09:26:29.193410 | orchestrator |  "vg_name": "ceph-a9a0f878-ef24-53af-8bd4-10a12036221e" 2025-02-04 09:26:29.194174 | orchestrator |  }, 2025-02-04 09:26:29.194409 | orchestrator |  { 2025-02-04 09:26:29.195171 | orchestrator |  "pv_name": "/dev/sdc", 2025-02-04 09:26:29.195441 | orchestrator |  "vg_name": "ceph-857e455f-002b-509a-b66d-9c4a1025daeb" 2025-02-04 09:26:29.196766 | orchestrator |  } 2025-02-04 09:26:29.198287 | orchestrator |  ] 2025-02-04 09:26:29.198819 | orchestrator |  } 2025-02-04 09:26:29.199505 | orchestrator | } 2025-02-04 09:26:29.199969 | orchestrator | 2025-02-04 09:26:29.200473 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-02-04 09:26:29.200818 | orchestrator | 2025-02-04 09:26:29.201121 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-02-04 09:26:29.201569 | orchestrator | Tuesday 04 February 2025 09:26:29 +0000 (0:00:00.952) 0:00:53.057 ****** 2025-02-04 09:26:29.450460 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-02-04 09:26:29.450850 | orchestrator | 2025-02-04 09:26:29.451131 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-02-04 09:26:29.451777 | orchestrator | Tuesday 04 February 2025 09:26:29 +0000 (0:00:00.265) 0:00:53.323 ****** 2025-02-04 09:26:29.692470 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:26:29.692679 | orchestrator | 2025-02-04 09:26:29.692745 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:26:29.693935 | orchestrator | Tuesday 04 February 2025 09:26:29 +0000 (0:00:00.240) 0:00:53.564 ****** 2025-02-04 09:26:30.160657 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-02-04 09:26:30.160865 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-02-04 09:26:30.161852 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-02-04 09:26:30.162199 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-02-04 09:26:30.162959 | orchestrator | included:
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-02-04 09:26:30.163443 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-02-04 09:26:30.164613 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-02-04 09:26:30.165149 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-02-04 09:26:30.167356 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-02-04 09:26:30.167747 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-02-04 09:26:30.169089 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-02-04 09:26:30.169442 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-02-04 09:26:30.169743 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-02-04 09:26:30.171181 | orchestrator | 2025-02-04 09:26:30.172497 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:26:30.174199 | orchestrator | Tuesday 04 February 2025 09:26:30 +0000 (0:00:00.469) 0:00:54.033 ****** 2025-02-04 09:26:30.375762 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:30.377129 | orchestrator | 2025-02-04 09:26:30.377969 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:26:30.378882 | orchestrator | Tuesday 04 February 2025 09:26:30 +0000 (0:00:00.214) 0:00:54.247 ****** 2025-02-04 09:26:30.602405 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:30.602805 | orchestrator | 2025-02-04 09:26:30.605652 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:26:30.606655 | orchestrator | Tuesday 04 February 2025 09:26:30 +0000 (0:00:00.226) 0:00:54.474 ****** 2025-02-04 09:26:30.822145 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:30.822847 | orchestrator | 2025-02-04 09:26:30.823221 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:26:30.824800 | orchestrator | Tuesday 04 February 2025 09:26:30 +0000 (0:00:00.220) 0:00:54.694 ****** 2025-02-04 09:26:31.019614 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:31.209306 | orchestrator | 2025-02-04 09:26:31.209458 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:26:31.209480 | orchestrator | Tuesday 04 February 2025 09:26:31 +0000 (0:00:00.197) 0:00:54.891 ****** 2025-02-04 09:26:31.209513 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:31.209591 | orchestrator | 2025-02-04 09:26:31.209612 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:26:31.795537 | orchestrator | Tuesday 04 February 2025 09:26:31 +0000 (0:00:00.190) 0:00:55.082 ****** 2025-02-04 09:26:31.795790 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:31.795881 | orchestrator | 2025-02-04 09:26:31.795901 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:26:31.795920 | orchestrator | Tuesday 04 February 2025 09:26:31 +0000 (0:00:00.585) 0:00:55.667 ****** 2025-02-04 09:26:32.031424 | orchestrator | skipping: 
[testbed-node-5] 2025-02-04 09:26:32.031888 | orchestrator | 2025-02-04 09:26:32.032407 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:26:32.032834 | orchestrator | Tuesday 04 February 2025 09:26:32 +0000 (0:00:00.237) 0:00:55.905 ****** 2025-02-04 09:26:32.258203 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:32.259230 | orchestrator | 2025-02-04 09:26:32.259854 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:26:32.262828 | orchestrator | Tuesday 04 February 2025 09:26:32 +0000 (0:00:00.225) 0:00:56.130 ****** 2025-02-04 09:26:32.721570 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a424fac1-723d-4e26-82ac-15e9ac8e6afc) 2025-02-04 09:26:32.722366 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a424fac1-723d-4e26-82ac-15e9ac8e6afc) 2025-02-04 09:26:32.722476 | orchestrator | 2025-02-04 09:26:32.723905 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:26:32.724732 | orchestrator | Tuesday 04 February 2025 09:26:32 +0000 (0:00:00.462) 0:00:56.593 ****** 2025-02-04 09:26:33.149654 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d26fda4b-4cd5-4c78-8c80-a561505edb1a) 2025-02-04 09:26:33.150825 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d26fda4b-4cd5-4c78-8c80-a561505edb1a) 2025-02-04 09:26:33.151768 | orchestrator | 2025-02-04 09:26:33.153337 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:26:33.154159 | orchestrator | Tuesday 04 February 2025 09:26:33 +0000 (0:00:00.428) 0:00:57.021 ****** 2025-02-04 09:26:33.595256 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6f1478d2-b213-4f65-abc0-539a0d8b61fa) 2025-02-04 09:26:33.596001 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6f1478d2-b213-4f65-abc0-539a0d8b61fa) 2025-02-04 09:26:33.596892 | orchestrator | 2025-02-04 09:26:33.597931 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:26:33.600421 | orchestrator | Tuesday 04 February 2025 09:26:33 +0000 (0:00:00.445) 0:00:57.467 ****** 2025-02-04 09:26:34.037314 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2e725b5a-39a0-4c9f-add8-ff554d181543) 2025-02-04 09:26:34.038241 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2e725b5a-39a0-4c9f-add8-ff554d181543) 2025-02-04 09:26:34.038781 | orchestrator | 2025-02-04 09:26:34.039477 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-02-04 09:26:34.040101 | orchestrator | Tuesday 04 February 2025 09:26:34 +0000 (0:00:00.442) 0:00:57.910 ****** 2025-02-04 09:26:34.405991 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-02-04 09:26:34.411159 | orchestrator | 2025-02-04 09:26:34.895588 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:26:34.895756 | orchestrator | Tuesday 04 February 2025 09:26:34 +0000 (0:00:00.366) 0:00:58.276 ****** 2025-02-04 09:26:34.895796 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-02-04 09:26:34.895872 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 
2025-02-04 09:26:34.898286 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-02-04 09:26:34.898431 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-02-04 09:26:34.898453 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-02-04 09:26:34.898472 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-02-04 09:26:34.899439 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-02-04 09:26:34.899745 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-02-04 09:26:34.900343 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-02-04 09:26:34.900888 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-02-04 09:26:34.901288 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-02-04 09:26:34.902419 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-02-04 09:26:34.902935 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-02-04 09:26:34.902999 | orchestrator | 2025-02-04 09:26:34.903183 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:26:34.903629 | orchestrator | Tuesday 04 February 2025 09:26:34 +0000 (0:00:00.489) 0:00:58.766 ****** 2025-02-04 09:26:35.528405 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:35.529298 | orchestrator | 2025-02-04 09:26:35.529419 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:26:35.529885 | orchestrator | Tuesday 04 February 2025 09:26:35 +0000 (0:00:00.634) 0:00:59.401 ****** 2025-02-04 09:26:35.735322 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:35.735681 | orchestrator | 2025-02-04 09:26:35.736166 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:26:35.737267 | orchestrator | Tuesday 04 February 2025 09:26:35 +0000 (0:00:00.205) 0:00:59.607 ****** 2025-02-04 09:26:35.950103 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:35.950759 | orchestrator | 2025-02-04 09:26:35.951901 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:26:35.952861 | orchestrator | Tuesday 04 February 2025 09:26:35 +0000 (0:00:00.214) 0:00:59.821 ****** 2025-02-04 09:26:36.165114 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:36.165675 | orchestrator | 2025-02-04 09:26:36.166072 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:26:36.167926 | orchestrator | Tuesday 04 February 2025 09:26:36 +0000 (0:00:00.215) 0:01:00.037 ****** 2025-02-04 09:26:36.363294 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:36.363739 | orchestrator | 2025-02-04 09:26:36.363797 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:26:36.364237 | orchestrator | Tuesday 04 February 2025 09:26:36 +0000 (0:00:00.198) 0:01:00.236 ****** 2025-02-04 09:26:36.587959 | orchestrator | 
skipping: [testbed-node-5] 2025-02-04 09:26:36.588155 | orchestrator | 2025-02-04 09:26:36.588186 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:26:36.588901 | orchestrator | Tuesday 04 February 2025 09:26:36 +0000 (0:00:00.224) 0:01:00.461 ****** 2025-02-04 09:26:36.805233 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:36.805985 | orchestrator | 2025-02-04 09:26:36.806633 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:26:36.807540 | orchestrator | Tuesday 04 February 2025 09:26:36 +0000 (0:00:00.213) 0:01:00.675 ****** 2025-02-04 09:26:36.998254 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:36.998771 | orchestrator | 2025-02-04 09:26:36.998962 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:26:36.999594 | orchestrator | Tuesday 04 February 2025 09:26:36 +0000 (0:00:00.195) 0:01:00.871 ****** 2025-02-04 09:26:37.850513 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-02-04 09:26:37.850742 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-02-04 09:26:37.850782 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-02-04 09:26:37.851795 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-02-04 09:26:37.851887 | orchestrator | 2025-02-04 09:26:37.851924 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:26:37.852013 | orchestrator | Tuesday 04 February 2025 09:26:37 +0000 (0:00:00.850) 0:01:01.721 ****** 2025-02-04 09:26:38.076036 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:38.076335 | orchestrator | 2025-02-04 09:26:38.076846 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:26:38.077516 | orchestrator | Tuesday 04 February 2025 09:26:38 +0000 (0:00:00.225) 0:01:01.947 ****** 2025-02-04 09:26:38.510239 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:38.510385 | orchestrator | 2025-02-04 09:26:38.511766 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:26:38.512663 | orchestrator | Tuesday 04 February 2025 09:26:38 +0000 (0:00:00.435) 0:01:02.383 ****** 2025-02-04 09:26:38.708485 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:38.709143 | orchestrator | 2025-02-04 09:26:38.713114 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-02-04 09:26:38.914996 | orchestrator | Tuesday 04 February 2025 09:26:38 +0000 (0:00:00.196) 0:01:02.579 ****** 2025-02-04 09:26:38.915133 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:38.915913 | orchestrator | 2025-02-04 09:26:38.917209 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-02-04 09:26:38.917540 | orchestrator | Tuesday 04 February 2025 09:26:38 +0000 (0:00:00.208) 0:01:02.788 ****** 2025-02-04 09:26:39.047336 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:39.048085 | orchestrator | 2025-02-04 09:26:39.049316 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-02-04 09:26:39.050172 | orchestrator | Tuesday 04 February 2025 09:26:39 +0000 (0:00:00.131) 0:01:02.920 ****** 2025-02-04 09:26:39.275248 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'25e96ed1-6b8f-57c8-bdd9-51fb1c446a39'}}) 2025-02-04 09:26:39.275392 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '89dbb78a-6e2f-596a-9aad-74f54f8525ce'}}) 2025-02-04 09:26:39.275421 | orchestrator | 2025-02-04 09:26:39.276292 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-02-04 09:26:39.276861 | orchestrator | Tuesday 04 February 2025 09:26:39 +0000 (0:00:00.227) 0:01:03.147 ****** 2025-02-04 09:26:41.038425 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39', 'data_vg': 'ceph-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39'}) 2025-02-04 09:26:41.038619 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-89dbb78a-6e2f-596a-9aad-74f54f8525ce', 'data_vg': 'ceph-89dbb78a-6e2f-596a-9aad-74f54f8525ce'}) 2025-02-04 09:26:41.040795 | orchestrator | 2025-02-04 09:26:41.234808 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-02-04 09:26:41.234969 | orchestrator | Tuesday 04 February 2025 09:26:41 +0000 (0:00:01.760) 0:01:04.908 ****** 2025-02-04 09:26:41.235053 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39', 'data_vg': 'ceph-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39'})  2025-02-04 09:26:41.235220 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-89dbb78a-6e2f-596a-9aad-74f54f8525ce', 'data_vg': 'ceph-89dbb78a-6e2f-596a-9aad-74f54f8525ce'})  2025-02-04 09:26:41.235583 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:41.236682 | orchestrator | 2025-02-04 09:26:41.237122 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-02-04 09:26:41.237457 | orchestrator | Tuesday 04 February 2025 09:26:41 +0000 (0:00:00.198) 0:01:05.107 ****** 2025-02-04 09:26:42.529676 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39', 'data_vg': 'ceph-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39'}) 2025-02-04 09:26:42.530263 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-89dbb78a-6e2f-596a-9aad-74f54f8525ce', 'data_vg': 'ceph-89dbb78a-6e2f-596a-9aad-74f54f8525ce'}) 2025-02-04 09:26:42.530748 | orchestrator | 2025-02-04 09:26:42.531416 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-02-04 09:26:42.531969 | orchestrator | Tuesday 04 February 2025 09:26:42 +0000 (0:00:01.294) 0:01:06.402 ****** 2025-02-04 09:26:42.713346 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39', 'data_vg': 'ceph-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39'})  2025-02-04 09:26:42.713948 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-89dbb78a-6e2f-596a-9aad-74f54f8525ce', 'data_vg': 'ceph-89dbb78a-6e2f-596a-9aad-74f54f8525ce'})  2025-02-04 09:26:42.714801 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:42.716020 | orchestrator | 2025-02-04 09:26:42.716788 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-02-04 09:26:42.717777 | orchestrator | Tuesday 04 February 2025 09:26:42 +0000 (0:00:00.184) 0:01:06.586 ****** 2025-02-04 09:26:42.847510 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:42.848919 | orchestrator | 2025-02-04 09:26:42.849722 | orchestrator | TASK [Print 'Create DB VGs'] 
*************************************************** 2025-02-04 09:26:42.852330 | orchestrator | Tuesday 04 February 2025 09:26:42 +0000 (0:00:00.133) 0:01:06.720 ****** 2025-02-04 09:26:43.195840 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39', 'data_vg': 'ceph-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39'})  2025-02-04 09:26:43.196433 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-89dbb78a-6e2f-596a-9aad-74f54f8525ce', 'data_vg': 'ceph-89dbb78a-6e2f-596a-9aad-74f54f8525ce'})  2025-02-04 09:26:43.198945 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:43.199776 | orchestrator | 2025-02-04 09:26:43.199812 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-02-04 09:26:43.200443 | orchestrator | Tuesday 04 February 2025 09:26:43 +0000 (0:00:00.347) 0:01:07.067 ****** 2025-02-04 09:26:43.352537 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:43.353299 | orchestrator | 2025-02-04 09:26:43.353342 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-02-04 09:26:43.353411 | orchestrator | Tuesday 04 February 2025 09:26:43 +0000 (0:00:00.157) 0:01:07.225 ****** 2025-02-04 09:26:43.570682 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39', 'data_vg': 'ceph-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39'})  2025-02-04 09:26:43.571363 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-89dbb78a-6e2f-596a-9aad-74f54f8525ce', 'data_vg': 'ceph-89dbb78a-6e2f-596a-9aad-74f54f8525ce'})  2025-02-04 09:26:43.571435 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:43.573060 | orchestrator | 2025-02-04 09:26:43.573866 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-02-04 09:26:43.574599 | orchestrator | Tuesday 04 February 2025 09:26:43 +0000 (0:00:00.218) 0:01:07.443 ****** 2025-02-04 09:26:43.722758 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:43.723196 | orchestrator | 2025-02-04 09:26:43.723802 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-02-04 09:26:43.724214 | orchestrator | Tuesday 04 February 2025 09:26:43 +0000 (0:00:00.152) 0:01:07.596 ****** 2025-02-04 09:26:43.922168 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39', 'data_vg': 'ceph-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39'})  2025-02-04 09:26:43.922351 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-89dbb78a-6e2f-596a-9aad-74f54f8525ce', 'data_vg': 'ceph-89dbb78a-6e2f-596a-9aad-74f54f8525ce'})  2025-02-04 09:26:43.923069 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:43.925894 | orchestrator | 2025-02-04 09:26:44.078625 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-02-04 09:26:44.078795 | orchestrator | Tuesday 04 February 2025 09:26:43 +0000 (0:00:00.197) 0:01:07.793 ****** 2025-02-04 09:26:44.078830 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:26:44.080520 | orchestrator | 2025-02-04 09:26:44.080565 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-02-04 09:26:44.080591 | orchestrator | Tuesday 04 February 2025 09:26:44 +0000 (0:00:00.157) 0:01:07.950 ****** 2025-02-04 09:26:44.262188 | orchestrator | skipping: 
[testbed-node-5] => (item={'data': 'osd-block-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39', 'data_vg': 'ceph-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39'})  2025-02-04 09:26:44.262567 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-89dbb78a-6e2f-596a-9aad-74f54f8525ce', 'data_vg': 'ceph-89dbb78a-6e2f-596a-9aad-74f54f8525ce'})  2025-02-04 09:26:44.263242 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:44.264396 | orchestrator | 2025-02-04 09:26:44.264477 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-02-04 09:26:44.265564 | orchestrator | Tuesday 04 February 2025 09:26:44 +0000 (0:00:00.185) 0:01:08.135 ****** 2025-02-04 09:26:44.436535 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39', 'data_vg': 'ceph-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39'})  2025-02-04 09:26:44.436645 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-89dbb78a-6e2f-596a-9aad-74f54f8525ce', 'data_vg': 'ceph-89dbb78a-6e2f-596a-9aad-74f54f8525ce'})  2025-02-04 09:26:44.436656 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:44.436687 | orchestrator | 2025-02-04 09:26:44.437170 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-02-04 09:26:44.437354 | orchestrator | Tuesday 04 February 2025 09:26:44 +0000 (0:00:00.174) 0:01:08.310 ****** 2025-02-04 09:26:44.608982 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39', 'data_vg': 'ceph-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39'})  2025-02-04 09:26:44.610726 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-89dbb78a-6e2f-596a-9aad-74f54f8525ce', 'data_vg': 'ceph-89dbb78a-6e2f-596a-9aad-74f54f8525ce'})  2025-02-04 09:26:44.611029 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:44.611058 | orchestrator | 2025-02-04 09:26:44.611078 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-02-04 09:26:44.611447 | orchestrator | Tuesday 04 February 2025 09:26:44 +0000 (0:00:00.172) 0:01:08.482 ****** 2025-02-04 09:26:44.737192 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:44.738710 | orchestrator | 2025-02-04 09:26:44.738751 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-02-04 09:26:44.738773 | orchestrator | Tuesday 04 February 2025 09:26:44 +0000 (0:00:00.125) 0:01:08.607 ****** 2025-02-04 09:26:44.874655 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:44.875688 | orchestrator | 2025-02-04 09:26:44.876768 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-02-04 09:26:44.877432 | orchestrator | Tuesday 04 February 2025 09:26:44 +0000 (0:00:00.140) 0:01:08.748 ****** 2025-02-04 09:26:45.253590 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:45.254464 | orchestrator | 2025-02-04 09:26:45.255373 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-02-04 09:26:45.256345 | orchestrator | Tuesday 04 February 2025 09:26:45 +0000 (0:00:00.376) 0:01:09.124 ****** 2025-02-04 09:26:45.400530 | orchestrator | ok: [testbed-node-5] => { 2025-02-04 09:26:45.400890 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-02-04 09:26:45.402095 | orchestrator | } 2025-02-04 09:26:45.402264 | orchestrator | 2025-02-04 09:26:45.403229 | orchestrator | 
TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-02-04 09:26:45.404108 | orchestrator | Tuesday 04 February 2025 09:26:45 +0000 (0:00:00.148) 0:01:09.273 ****** 2025-02-04 09:26:45.552950 | orchestrator | ok: [testbed-node-5] => { 2025-02-04 09:26:45.554007 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-02-04 09:26:45.554179 | orchestrator | } 2025-02-04 09:26:45.554262 | orchestrator | 2025-02-04 09:26:45.554646 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-02-04 09:26:45.555089 | orchestrator | Tuesday 04 February 2025 09:26:45 +0000 (0:00:00.152) 0:01:09.426 ****** 2025-02-04 09:26:45.701587 | orchestrator | ok: [testbed-node-5] => { 2025-02-04 09:26:45.702199 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-02-04 09:26:45.702584 | orchestrator | } 2025-02-04 09:26:45.703346 | orchestrator | 2025-02-04 09:26:45.704141 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-02-04 09:26:45.704515 | orchestrator | Tuesday 04 February 2025 09:26:45 +0000 (0:00:00.147) 0:01:09.573 ****** 2025-02-04 09:26:46.244157 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:26:46.244872 | orchestrator | 2025-02-04 09:26:46.245028 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-02-04 09:26:46.245487 | orchestrator | Tuesday 04 February 2025 09:26:46 +0000 (0:00:00.541) 0:01:10.114 ****** 2025-02-04 09:26:46.800100 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:26:46.801208 | orchestrator | 2025-02-04 09:26:46.804052 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-02-04 09:26:47.328576 | orchestrator | Tuesday 04 February 2025 09:26:46 +0000 (0:00:00.557) 0:01:10.672 ****** 2025-02-04 09:26:47.328742 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:26:47.329115 | orchestrator | 2025-02-04 09:26:47.329590 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-02-04 09:26:47.330495 | orchestrator | Tuesday 04 February 2025 09:26:47 +0000 (0:00:00.527) 0:01:11.200 ****** 2025-02-04 09:26:47.502508 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:26:47.503820 | orchestrator | 2025-02-04 09:26:47.506472 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-02-04 09:26:47.622320 | orchestrator | Tuesday 04 February 2025 09:26:47 +0000 (0:00:00.174) 0:01:11.374 ****** 2025-02-04 09:26:47.622455 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:47.623187 | orchestrator | 2025-02-04 09:26:47.623646 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-02-04 09:26:47.624339 | orchestrator | Tuesday 04 February 2025 09:26:47 +0000 (0:00:00.120) 0:01:11.495 ****** 2025-02-04 09:26:47.722985 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:47.724376 | orchestrator | 2025-02-04 09:26:47.725352 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-02-04 09:26:47.726787 | orchestrator | Tuesday 04 February 2025 09:26:47 +0000 (0:00:00.100) 0:01:11.595 ****** 2025-02-04 09:26:47.865053 | orchestrator | ok: [testbed-node-5] => { 2025-02-04 09:26:47.866381 | orchestrator |  "vgs_report": { 2025-02-04 09:26:47.867360 | orchestrator |  "vg": [] 2025-02-04 09:26:47.868818 | orchestrator |  } 2025-02-04 09:26:47.869749 | orchestrator 
| } 2025-02-04 09:26:47.870190 | orchestrator | 2025-02-04 09:26:47.870791 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-02-04 09:26:47.871500 | orchestrator | Tuesday 04 February 2025 09:26:47 +0000 (0:00:00.141) 0:01:11.737 ****** 2025-02-04 09:26:48.290458 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:48.290657 | orchestrator | 2025-02-04 09:26:48.290930 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-02-04 09:26:48.291948 | orchestrator | Tuesday 04 February 2025 09:26:48 +0000 (0:00:00.424) 0:01:12.161 ****** 2025-02-04 09:26:48.443738 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:48.444607 | orchestrator | 2025-02-04 09:26:48.444642 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-02-04 09:26:48.444849 | orchestrator | Tuesday 04 February 2025 09:26:48 +0000 (0:00:00.155) 0:01:12.316 ****** 2025-02-04 09:26:48.573031 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:48.573504 | orchestrator | 2025-02-04 09:26:48.573958 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-02-04 09:26:48.574772 | orchestrator | Tuesday 04 February 2025 09:26:48 +0000 (0:00:00.128) 0:01:12.445 ****** 2025-02-04 09:26:48.717645 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:48.718848 | orchestrator | 2025-02-04 09:26:48.719983 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-02-04 09:26:48.722317 | orchestrator | Tuesday 04 February 2025 09:26:48 +0000 (0:00:00.144) 0:01:12.589 ****** 2025-02-04 09:26:48.861623 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:48.861866 | orchestrator | 2025-02-04 09:26:48.862238 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-02-04 09:26:48.862863 | orchestrator | Tuesday 04 February 2025 09:26:48 +0000 (0:00:00.144) 0:01:12.734 ****** 2025-02-04 09:26:49.012408 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:49.013050 | orchestrator | 2025-02-04 09:26:49.013166 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-02-04 09:26:49.015138 | orchestrator | Tuesday 04 February 2025 09:26:49 +0000 (0:00:00.148) 0:01:12.883 ****** 2025-02-04 09:26:49.183883 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:49.185189 | orchestrator | 2025-02-04 09:26:49.186389 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-02-04 09:26:49.187854 | orchestrator | Tuesday 04 February 2025 09:26:49 +0000 (0:00:00.173) 0:01:13.056 ****** 2025-02-04 09:26:49.328444 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:49.329907 | orchestrator | 2025-02-04 09:26:49.329972 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-02-04 09:26:49.330000 | orchestrator | Tuesday 04 February 2025 09:26:49 +0000 (0:00:00.135) 0:01:13.192 ****** 2025-02-04 09:26:49.463316 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:49.464479 | orchestrator | 2025-02-04 09:26:49.464513 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-02-04 09:26:49.464536 | orchestrator | Tuesday 04 February 2025 09:26:49 +0000 (0:00:00.141) 0:01:13.333 ****** 2025-02-04 09:26:49.606477 | orchestrator | 
skipping: [testbed-node-5] 2025-02-04 09:26:49.607013 | orchestrator | 2025-02-04 09:26:49.607054 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-02-04 09:26:49.607338 | orchestrator | Tuesday 04 February 2025 09:26:49 +0000 (0:00:00.145) 0:01:13.479 ****** 2025-02-04 09:26:49.770318 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:49.772569 | orchestrator | 2025-02-04 09:26:49.772979 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-02-04 09:26:49.774056 | orchestrator | Tuesday 04 February 2025 09:26:49 +0000 (0:00:00.160) 0:01:13.640 ****** 2025-02-04 09:26:49.904199 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:49.904894 | orchestrator | 2025-02-04 09:26:49.905667 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-02-04 09:26:49.906123 | orchestrator | Tuesday 04 February 2025 09:26:49 +0000 (0:00:00.137) 0:01:13.777 ****** 2025-02-04 09:26:50.292304 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:50.292926 | orchestrator | 2025-02-04 09:26:50.293361 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-02-04 09:26:50.293982 | orchestrator | Tuesday 04 February 2025 09:26:50 +0000 (0:00:00.384) 0:01:14.162 ****** 2025-02-04 09:26:50.450918 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:50.451079 | orchestrator | 2025-02-04 09:26:50.451103 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-02-04 09:26:50.451124 | orchestrator | Tuesday 04 February 2025 09:26:50 +0000 (0:00:00.162) 0:01:14.324 ****** 2025-02-04 09:26:50.642251 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39', 'data_vg': 'ceph-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39'})  2025-02-04 09:26:50.643031 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-89dbb78a-6e2f-596a-9aad-74f54f8525ce', 'data_vg': 'ceph-89dbb78a-6e2f-596a-9aad-74f54f8525ce'})  2025-02-04 09:26:50.643071 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:50.643937 | orchestrator | 2025-02-04 09:26:50.644521 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-02-04 09:26:50.645018 | orchestrator | Tuesday 04 February 2025 09:26:50 +0000 (0:00:00.190) 0:01:14.514 ****** 2025-02-04 09:26:50.817749 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39', 'data_vg': 'ceph-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39'})  2025-02-04 09:26:50.817962 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-89dbb78a-6e2f-596a-9aad-74f54f8525ce', 'data_vg': 'ceph-89dbb78a-6e2f-596a-9aad-74f54f8525ce'})  2025-02-04 09:26:50.817989 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:50.818014 | orchestrator | 2025-02-04 09:26:50.818331 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-02-04 09:26:50.818566 | orchestrator | Tuesday 04 February 2025 09:26:50 +0000 (0:00:00.175) 0:01:14.690 ****** 2025-02-04 09:26:50.975733 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39', 'data_vg': 'ceph-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39'})  2025-02-04 09:26:50.976518 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-89dbb78a-6e2f-596a-9aad-74f54f8525ce', 'data_vg': 'ceph-89dbb78a-6e2f-596a-9aad-74f54f8525ce'})  2025-02-04 09:26:50.977814 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:50.980102 | orchestrator | 2025-02-04 09:26:51.144582 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-02-04 09:26:51.144765 | orchestrator | Tuesday 04 February 2025 09:26:50 +0000 (0:00:00.157) 0:01:14.848 ****** 2025-02-04 09:26:51.144805 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39', 'data_vg': 'ceph-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39'})  2025-02-04 09:26:51.146066 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-89dbb78a-6e2f-596a-9aad-74f54f8525ce', 'data_vg': 'ceph-89dbb78a-6e2f-596a-9aad-74f54f8525ce'})  2025-02-04 09:26:51.146445 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:51.147218 | orchestrator | 2025-02-04 09:26:51.147903 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-02-04 09:26:51.149028 | orchestrator | Tuesday 04 February 2025 09:26:51 +0000 (0:00:00.168) 0:01:15.016 ****** 2025-02-04 09:26:51.336653 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39', 'data_vg': 'ceph-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39'})  2025-02-04 09:26:51.336932 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-89dbb78a-6e2f-596a-9aad-74f54f8525ce', 'data_vg': 'ceph-89dbb78a-6e2f-596a-9aad-74f54f8525ce'})  2025-02-04 09:26:51.337746 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:51.338125 | orchestrator | 2025-02-04 09:26:51.339131 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-02-04 09:26:51.340146 | orchestrator | Tuesday 04 February 2025 09:26:51 +0000 (0:00:00.189) 0:01:15.206 ****** 2025-02-04 09:26:51.533133 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39', 'data_vg': 'ceph-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39'})  2025-02-04 09:26:51.533308 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-89dbb78a-6e2f-596a-9aad-74f54f8525ce', 'data_vg': 'ceph-89dbb78a-6e2f-596a-9aad-74f54f8525ce'})  2025-02-04 09:26:51.533335 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:51.533436 | orchestrator | 2025-02-04 09:26:51.534090 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-02-04 09:26:51.534494 | orchestrator | Tuesday 04 February 2025 09:26:51 +0000 (0:00:00.199) 0:01:15.406 ****** 2025-02-04 09:26:51.723499 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39', 'data_vg': 'ceph-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39'})  2025-02-04 09:26:51.723685 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-89dbb78a-6e2f-596a-9aad-74f54f8525ce', 'data_vg': 'ceph-89dbb78a-6e2f-596a-9aad-74f54f8525ce'})  2025-02-04 09:26:51.723802 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:51.724204 | orchestrator | 2025-02-04 09:26:51.725026 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-02-04 09:26:51.725976 | orchestrator | Tuesday 04 February 2025 09:26:51 +0000 (0:00:00.190) 0:01:15.596 ****** 2025-02-04 09:26:51.889240 | orchestrator | skipping: [testbed-node-5] => 
(item={'data': 'osd-block-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39', 'data_vg': 'ceph-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39'})  2025-02-04 09:26:51.889477 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-89dbb78a-6e2f-596a-9aad-74f54f8525ce', 'data_vg': 'ceph-89dbb78a-6e2f-596a-9aad-74f54f8525ce'})  2025-02-04 09:26:51.890526 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:51.891247 | orchestrator | 2025-02-04 09:26:51.894312 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-02-04 09:26:51.894498 | orchestrator | Tuesday 04 February 2025 09:26:51 +0000 (0:00:00.165) 0:01:15.761 ****** 2025-02-04 09:26:52.382948 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:26:52.383187 | orchestrator | 2025-02-04 09:26:52.383887 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-02-04 09:26:52.384216 | orchestrator | Tuesday 04 February 2025 09:26:52 +0000 (0:00:00.492) 0:01:16.254 ****** 2025-02-04 09:26:53.108649 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:26:53.109326 | orchestrator | 2025-02-04 09:26:53.109438 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-02-04 09:26:53.109491 | orchestrator | Tuesday 04 February 2025 09:26:53 +0000 (0:00:00.727) 0:01:16.982 ****** 2025-02-04 09:26:53.257293 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:26:53.257505 | orchestrator | 2025-02-04 09:26:53.257543 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-02-04 09:26:53.257765 | orchestrator | Tuesday 04 February 2025 09:26:53 +0000 (0:00:00.147) 0:01:17.129 ****** 2025-02-04 09:26:53.463725 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39', 'vg_name': 'ceph-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39'}) 2025-02-04 09:26:53.463919 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-89dbb78a-6e2f-596a-9aad-74f54f8525ce', 'vg_name': 'ceph-89dbb78a-6e2f-596a-9aad-74f54f8525ce'}) 2025-02-04 09:26:53.463964 | orchestrator | 2025-02-04 09:26:53.464020 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-02-04 09:26:53.464248 | orchestrator | Tuesday 04 February 2025 09:26:53 +0000 (0:00:00.203) 0:01:17.332 ****** 2025-02-04 09:26:53.647242 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39', 'data_vg': 'ceph-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39'})  2025-02-04 09:26:53.831070 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-89dbb78a-6e2f-596a-9aad-74f54f8525ce', 'data_vg': 'ceph-89dbb78a-6e2f-596a-9aad-74f54f8525ce'})  2025-02-04 09:26:53.831216 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:53.831236 | orchestrator | 2025-02-04 09:26:53.831253 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-02-04 09:26:53.831269 | orchestrator | Tuesday 04 February 2025 09:26:53 +0000 (0:00:00.184) 0:01:17.517 ****** 2025-02-04 09:26:53.831300 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39', 'data_vg': 'ceph-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39'})  2025-02-04 09:26:53.831447 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-89dbb78a-6e2f-596a-9aad-74f54f8525ce', 'data_vg': 'ceph-89dbb78a-6e2f-596a-9aad-74f54f8525ce'})  
2025-02-04 09:26:53.831475 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:53.831634 | orchestrator | 2025-02-04 09:26:53.832108 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-02-04 09:26:53.833397 | orchestrator | Tuesday 04 February 2025 09:26:53 +0000 (0:00:00.186) 0:01:17.704 ****** 2025-02-04 09:26:54.011912 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39', 'data_vg': 'ceph-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39'})  2025-02-04 09:26:54.015148 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-89dbb78a-6e2f-596a-9aad-74f54f8525ce', 'data_vg': 'ceph-89dbb78a-6e2f-596a-9aad-74f54f8525ce'})  2025-02-04 09:26:54.015879 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:26:54.015927 | orchestrator | 2025-02-04 09:26:54.016048 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-02-04 09:26:54.016873 | orchestrator | Tuesday 04 February 2025 09:26:54 +0000 (0:00:00.179) 0:01:17.883 ****** 2025-02-04 09:26:54.463326 | orchestrator | ok: [testbed-node-5] => { 2025-02-04 09:26:54.464052 | orchestrator |  "lvm_report": { 2025-02-04 09:26:54.464809 | orchestrator |  "lv": [ 2025-02-04 09:26:54.466249 | orchestrator |  { 2025-02-04 09:26:54.466489 | orchestrator |  "lv_name": "osd-block-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39", 2025-02-04 09:26:54.467793 | orchestrator |  "vg_name": "ceph-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39" 2025-02-04 09:26:54.468036 | orchestrator |  }, 2025-02-04 09:26:54.469193 | orchestrator |  { 2025-02-04 09:26:54.469603 | orchestrator |  "lv_name": "osd-block-89dbb78a-6e2f-596a-9aad-74f54f8525ce", 2025-02-04 09:26:54.470126 | orchestrator |  "vg_name": "ceph-89dbb78a-6e2f-596a-9aad-74f54f8525ce" 2025-02-04 09:26:54.470342 | orchestrator |  } 2025-02-04 09:26:54.471078 | orchestrator |  ], 2025-02-04 09:26:54.472098 | orchestrator |  "pv": [ 2025-02-04 09:26:54.472319 | orchestrator |  { 2025-02-04 09:26:54.472633 | orchestrator |  "pv_name": "/dev/sdb", 2025-02-04 09:26:54.473258 | orchestrator |  "vg_name": "ceph-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39" 2025-02-04 09:26:54.473878 | orchestrator |  }, 2025-02-04 09:26:54.474290 | orchestrator |  { 2025-02-04 09:26:54.474560 | orchestrator |  "pv_name": "/dev/sdc", 2025-02-04 09:26:54.475777 | orchestrator |  "vg_name": "ceph-89dbb78a-6e2f-596a-9aad-74f54f8525ce" 2025-02-04 09:26:54.476259 | orchestrator |  } 2025-02-04 09:26:54.476645 | orchestrator |  ] 2025-02-04 09:26:54.476846 | orchestrator |  } 2025-02-04 09:26:54.476872 | orchestrator | } 2025-02-04 09:26:54.478345 | orchestrator | 2025-02-04 09:26:54.478842 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:26:54.479691 | orchestrator | 2025-02-04 09:26:54 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-04 09:26:54.480142 | orchestrator | 2025-02-04 09:26:54 | INFO  | Please wait and do not abort execution. 
2025-02-04 09:26:54.480221 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-02-04 09:26:54.481218 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-02-04 09:26:54.483328 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-02-04 09:26:54.483505 | orchestrator | 2025-02-04 09:26:54.484654 | orchestrator | 2025-02-04 09:26:54.485468 | orchestrator | 2025-02-04 09:26:54.485810 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:26:54.487461 | orchestrator | Tuesday 04 February 2025 09:26:54 +0000 (0:00:00.449) 0:01:18.333 ****** 2025-02-04 09:26:54.488012 | orchestrator | =============================================================================== 2025-02-04 09:26:54.488202 | orchestrator | Create block VGs -------------------------------------------------------- 5.53s 2025-02-04 09:26:54.489140 | orchestrator | Create block LVs -------------------------------------------------------- 4.04s 2025-02-04 09:26:54.489888 | orchestrator | Print LVM report data --------------------------------------------------- 2.13s 2025-02-04 09:26:54.490781 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.99s 2025-02-04 09:26:54.491138 | orchestrator | Add known links to the list of available block devices ------------------ 1.81s 2025-02-04 09:26:54.491910 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.76s 2025-02-04 09:26:54.492091 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.57s 2025-02-04 09:26:54.492812 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.56s 2025-02-04 09:26:54.493165 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.55s 2025-02-04 09:26:54.493578 | orchestrator | Add known partitions to the list of available block devices ------------- 1.52s 2025-02-04 09:26:54.494224 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.20s 2025-02-04 09:26:54.494552 | orchestrator | Create dict of block VGs -> PVs from ceph_osd_devices ------------------- 0.89s 2025-02-04 09:26:54.494840 | orchestrator | Add known partitions to the list of available block devices ------------- 0.85s 2025-02-04 09:26:54.495180 | orchestrator | Add known links to the list of available block devices ------------------ 0.81s 2025-02-04 09:26:54.495593 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.75s 2025-02-04 09:26:54.500095 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.75s 2025-02-04 09:26:54.502439 | orchestrator | Get initial list of available block devices ----------------------------- 0.72s 2025-02-04 09:26:54.502482 | orchestrator | Print LVM VG sizes ------------------------------------------------------ 0.71s 2025-02-04 09:26:54.503189 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.70s 2025-02-04 09:26:54.503690 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.70s 2025-02-04 09:26:56.740882 | orchestrator | 2025-02-04 09:26:56 | INFO  | Task 12517ffc-68dc-4e5d-802f-85312e871497 (facts) was prepared for execution. 
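The play that just recapped ("Ceph create LVM devices") built one volume group per OSD data disk and one logical volume inside it, then re-read the layout and printed it as the lvm_report JSON shown above. Below is a minimal bash sketch of the equivalent manual steps, using the device paths and UUIDs from this run on testbed-node-5; sizing the LV at 100%VG is an assumption, and the play itself performs these steps through Ansible tasks rather than raw shell commands.

```bash
#!/usr/bin/env bash
# Illustrative sketch only: rebuild the block VG/LV layout reported for
# testbed-node-5. Device paths and UUIDs come from the log above; the
# 100%VG sizing is an assumption, not taken from the play.
set -e

declare -A osd_devices=(
  [/dev/sdb]="25e96ed1-6b8f-57c8-bdd9-51fb1c446a39"
  [/dev/sdc]="89dbb78a-6e2f-596a-9aad-74f54f8525ce"
)

for dev in "${!osd_devices[@]}"; do
  uuid=${osd_devices[$dev]}
  vgcreate "ceph-${uuid}" "$dev"                            # one VG per data disk
  lvcreate -l 100%VG -n "osd-block-${uuid}" "ceph-${uuid}"  # block LV for the OSD
done

# Reproduce the "Print LVM report data" view: LV->VG and PV->VG pairs as JSON.
lvs --reportformat json -o lv_name,vg_name
pvs --reportformat json -o pv_name,vg_name
```

The ceph-&lt;uuid&gt;/osd-block-&lt;uuid&gt; naming is the same VG/LV pairing that appears in the lvm_volumes items echoed in the skip output, which is why the later "Fail if block LV defined in lvm_volumes is missing" task compares exactly these name pairs.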
2025-02-04 09:27:00.112930 | orchestrator | 2025-02-04 09:26:56 | INFO  | It takes a moment until task 12517ffc-68dc-4e5d-802f-85312e871497 (facts) has been started and output is visible here. 2025-02-04 09:27:00.113075 | orchestrator | 2025-02-04 09:27:00.115548 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-02-04 09:27:00.118355 | orchestrator | 2025-02-04 09:27:00.120456 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-02-04 09:27:00.122592 | orchestrator | Tuesday 04 February 2025 09:27:00 +0000 (0:00:00.221) 0:00:00.221 ****** 2025-02-04 09:27:01.240355 | orchestrator | ok: [testbed-manager] 2025-02-04 09:27:01.241020 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:27:01.242419 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:27:01.242859 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:27:01.245015 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:27:01.246013 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:27:01.247311 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:27:01.248156 | orchestrator | 2025-02-04 09:27:01.248882 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-02-04 09:27:01.249688 | orchestrator | Tuesday 04 February 2025 09:27:01 +0000 (0:00:01.126) 0:00:01.347 ****** 2025-02-04 09:27:01.401753 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:27:01.484619 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:27:01.567130 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:27:01.647545 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:27:01.721771 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:27:02.482530 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:27:02.482733 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:27:02.483746 | orchestrator | 2025-02-04 09:27:02.484829 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-02-04 09:27:02.485844 | orchestrator | 2025-02-04 09:27:02.486842 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-02-04 09:27:02.488473 | orchestrator | Tuesday 04 February 2025 09:27:02 +0000 (0:00:01.247) 0:00:02.595 ****** 2025-02-04 09:27:07.486674 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:27:07.487734 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:27:07.491042 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:27:07.491777 | orchestrator | ok: [testbed-manager] 2025-02-04 09:27:07.492759 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:27:07.494086 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:27:07.494736 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:27:07.498309 | orchestrator | 2025-02-04 09:27:07.499723 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-02-04 09:27:07.500925 | orchestrator | 2025-02-04 09:27:07.501650 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-02-04 09:27:07.501853 | orchestrator | Tuesday 04 February 2025 09:27:07 +0000 (0:00:05.003) 0:00:07.598 ****** 2025-02-04 09:27:07.650096 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:27:07.734153 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:27:07.836150 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:27:07.925393 | orchestrator | skipping: [testbed-node-2] 2025-02-04 
09:27:08.027271 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:27:08.072515 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:27:08.072732 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:27:08.073696 | orchestrator | 2025-02-04 09:27:08.073913 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:27:08.074203 | orchestrator | 2025-02-04 09:27:08 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-02-04 09:27:08.074775 | orchestrator | 2025-02-04 09:27:08 | INFO  | Please wait and do not abort execution. 2025-02-04 09:27:08.074814 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-04 09:27:08.075337 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-04 09:27:08.075423 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-04 09:27:08.076063 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-04 09:27:08.076508 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-04 09:27:08.076850 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-04 09:27:08.077173 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-04 09:27:08.077416 | orchestrator | 2025-02-04 09:27:08.077621 | orchestrator | 2025-02-04 09:27:08.077881 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:27:08.078081 | orchestrator | Tuesday 04 February 2025 09:27:08 +0000 (0:00:00.587) 0:00:08.186 ****** 2025-02-04 09:27:08.078217 | orchestrator | =============================================================================== 2025-02-04 09:27:08.078511 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.00s 2025-02-04 09:27:08.078870 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.25s 2025-02-04 09:27:08.079040 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.13s 2025-02-04 09:27:08.079160 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.59s 2025-02-04 09:27:08.707650 | orchestrator | 2025-02-04 09:27:08.711923 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Tue Feb 4 09:27:08 UTC 2025 2025-02-04 09:27:10.137536 | orchestrator | 2025-02-04 09:27:10.137667 | orchestrator | 2025-02-04 09:27:10 | INFO  | Collection nutshell is prepared for execution 2025-02-04 09:27:10.143815 | orchestrator | 2025-02-04 09:27:10 | INFO  | D [0] - dotfiles 2025-02-04 09:27:10.143992 | orchestrator | 2025-02-04 09:27:10 | INFO  | D [0] - homer 2025-02-04 09:27:10.144104 | orchestrator | 2025-02-04 09:27:10 | INFO  | D [0] - netdata 2025-02-04 09:27:10.144121 | orchestrator | 2025-02-04 09:27:10 | INFO  | D [0] - openstackclient 2025-02-04 09:27:10.144131 | orchestrator | 2025-02-04 09:27:10 | INFO  | D [0] - phpmyadmin 2025-02-04 09:27:10.144141 | orchestrator | 2025-02-04 09:27:10 | INFO  | A [0] - common 2025-02-04 09:27:10.144151 | orchestrator | 2025-02-04 09:27:10 | INFO  | A [1] -- loadbalancer 2025-02-04 09:27:10.144178 | orchestrator | 2025-02-04 09:27:10 | INFO  | D [2] 
--- opensearch 2025-02-04 09:27:10.144189 | orchestrator | 2025-02-04 09:27:10 | INFO  | A [2] --- mariadb-ng 2025-02-04 09:27:10.144199 | orchestrator | 2025-02-04 09:27:10 | INFO  | D [3] ---- horizon 2025-02-04 09:27:10.144208 | orchestrator | 2025-02-04 09:27:10 | INFO  | A [3] ---- keystone 2025-02-04 09:27:10.144219 | orchestrator | 2025-02-04 09:27:10 | INFO  | A [4] ----- neutron 2025-02-04 09:27:10.144229 | orchestrator | 2025-02-04 09:27:10 | INFO  | D [5] ------ wait-for-nova 2025-02-04 09:27:10.144277 | orchestrator | 2025-02-04 09:27:10 | INFO  | A [5] ------ octavia 2025-02-04 09:27:10.144293 | orchestrator | 2025-02-04 09:27:10 | INFO  | D [4] ----- barbican 2025-02-04 09:27:10.144386 | orchestrator | 2025-02-04 09:27:10 | INFO  | D [4] ----- designate 2025-02-04 09:27:10.144401 | orchestrator | 2025-02-04 09:27:10 | INFO  | D [4] ----- ironic 2025-02-04 09:27:10.144410 | orchestrator | 2025-02-04 09:27:10 | INFO  | D [4] ----- placement 2025-02-04 09:27:10.144420 | orchestrator | 2025-02-04 09:27:10 | INFO  | D [4] ----- magnum 2025-02-04 09:27:10.144433 | orchestrator | 2025-02-04 09:27:10 | INFO  | A [1] -- openvswitch 2025-02-04 09:27:10.144498 | orchestrator | 2025-02-04 09:27:10 | INFO  | D [2] --- ovn 2025-02-04 09:27:10.144580 | orchestrator | 2025-02-04 09:27:10 | INFO  | D [1] -- memcached 2025-02-04 09:27:10.144620 | orchestrator | 2025-02-04 09:27:10 | INFO  | D [1] -- redis 2025-02-04 09:27:10.144634 | orchestrator | 2025-02-04 09:27:10 | INFO  | D [1] -- rabbitmq-ng 2025-02-04 09:27:10.144819 | orchestrator | 2025-02-04 09:27:10 | INFO  | A [0] - kubernetes 2025-02-04 09:27:10.145037 | orchestrator | 2025-02-04 09:27:10 | INFO  | D [1] -- kubeconfig 2025-02-04 09:27:10.146671 | orchestrator | 2025-02-04 09:27:10 | INFO  | A [1] -- copy-kubeconfig 2025-02-04 09:27:10.146730 | orchestrator | 2025-02-04 09:27:10 | INFO  | A [0] - ceph 2025-02-04 09:27:10.146790 | orchestrator | 2025-02-04 09:27:10 | INFO  | A [1] -- ceph-pools 2025-02-04 09:27:10.146844 | orchestrator | 2025-02-04 09:27:10 | INFO  | A [2] --- copy-ceph-keys 2025-02-04 09:27:10.146855 | orchestrator | 2025-02-04 09:27:10 | INFO  | A [3] ---- cephclient 2025-02-04 09:27:10.146864 | orchestrator | 2025-02-04 09:27:10 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-02-04 09:27:10.146873 | orchestrator | 2025-02-04 09:27:10 | INFO  | A [4] ----- wait-for-keystone 2025-02-04 09:27:10.146881 | orchestrator | 2025-02-04 09:27:10 | INFO  | D [5] ------ kolla-ceph-rgw 2025-02-04 09:27:10.146890 | orchestrator | 2025-02-04 09:27:10 | INFO  | D [5] ------ glance 2025-02-04 09:27:10.146901 | orchestrator | 2025-02-04 09:27:10 | INFO  | D [5] ------ cinder 2025-02-04 09:27:10.146959 | orchestrator | 2025-02-04 09:27:10 | INFO  | D [5] ------ nova 2025-02-04 09:27:10.146972 | orchestrator | 2025-02-04 09:27:10 | INFO  | A [4] ----- prometheus 2025-02-04 09:27:10.146985 | orchestrator | 2025-02-04 09:27:10 | INFO  | D [5] ------ grafana 2025-02-04 09:27:10.266482 | orchestrator | 2025-02-04 09:27:10 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-02-04 09:27:12.097345 | orchestrator | 2025-02-04 09:27:10 | INFO  | Tasks are running in the background 2025-02-04 09:27:12.097488 | orchestrator | 2025-02-04 09:27:12 | INFO  | No task IDs specified, wait for all currently running tasks 2025-02-04 09:27:14.204465 | orchestrator | 2025-02-04 09:27:14 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:27:14.208975 | orchestrator | 2025-02-04 09:27:14 | 
INFO  | Task bfa01e58-f10b-42d5-9ade-cdd4d8d16862 is in state STARTED
2025-02-04 09:27:14 .. 09:27:32 | orchestrator | INFO  | Tasks cd29dea3-7f34-4656-ac40-3dc8e1d4db53, bfa01e58-f10b-42d5-9ade-cdd4d8d16862, a7df5328-a962-44e2-be15-661fa5b61ce1, 66b5d27e-87ad-4655-9bb1-4181507cb757, 0e68b04b-faba-4a85-a778-cf056b5233cd and 0a0adcb5-1a8c-4262-844e-6329a18170f9 remain in state STARTED; each polling round ends with "Wait 1 second(s) until the next check"
2025-02-04 09:27:35.695552 | orchestrator | 2025-02-04 09:27:35 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED
2025-02-04 09:27:35.702003 | orchestrator | 2025-02-04 09:27:35 | INFO  | Task bfa01e58-f10b-42d5-9ade-cdd4d8d16862 is in state STARTED
2025-02-04 09:27:35.710485 | orchestrator | 2025-02-04 09:27:35 | INFO  | Task a7df5328-a962-44e2-be15-661fa5b61ce1 is in state STARTED
2025-02-04 09:27:35.717701 | orchestrator | 2025-02-04 09:27:35 | INFO  | Task 66b5d27e-87ad-4655-9bb1-4181507cb757 is in state STARTED
2025-02-04 09:27:35.722540 | orchestrator | 2025-02-04 09:27:35 | INFO  | Task 1f1c5dc7-31ef-4c35-9369-2cd8a4d8e9ea is in state STARTED
2025-02-04 09:27:35.730395 | orchestrator |
2025-02-04 09:27:35.730463 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-02-04 09:27:35.730481 | orchestrator |
2025-02-04 09:27:35.730496 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2025-02-04 09:27:35.730511 | orchestrator | Tuesday 04 February 2025 09:27:20 +0000 (0:00:00.585) 0:00:00.585 ******
2025-02-04 09:27:35.730526 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:27:35.730541 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:27:35.730556 | orchestrator | changed: [testbed-manager]
2025-02-04 09:27:35.730570 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:27:35.730585 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:27:35.730599 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:27:35.730613 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:27:35.730628 | orchestrator |
2025-02-04 09:27:35.730642 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2025-02-04 09:27:35.730656 | orchestrator | Tuesday 04 February 2025 09:27:24 +0000 (0:00:03.972) 0:00:04.558 ******
2025-02-04 09:27:35.730672 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-02-04 09:27:35.730694 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-02-04 09:27:35.730753 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-02-04 09:27:35.730772 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-02-04 09:27:35.730786 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-02-04 09:27:35.730800 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-02-04 09:27:35.730815 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-02-04 09:27:35.730829 | orchestrator |
2025-02-04 09:27:35.730843 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2025-02-04 09:27:35.730858 | orchestrator | Tuesday 04 February 2025 09:27:26 +0000 (0:00:02.321) 0:00:06.879 ******
2025-02-04 09:27:35.730875 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'failed': False, 'msg': 'non-zero return code', …, 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-02-04 09:27:35.731054 | orchestrator | ok: [testbed-manager], [testbed-node-1] .. [testbed-node-5] => (same result on each host: 'ls -F ~/.tmux.conf' returns rc=2, "No such file or directory", failed_when_result=False)
2025-02-04 09:27:35.731069 | orchestrator |
2025-02-04 09:27:35.731084 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2025-02-04 09:27:35.731099 | orchestrator | Tuesday 04 February 2025 09:27:28 +0000 (0:00:01.597) 0:00:08.477 ******
2025-02-04 09:27:35.731113 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-02-04 09:27:35.731127 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-02-04 09:27:35.731141 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-02-04 09:27:35.731155 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-02-04 09:27:35.731170 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-02-04 09:27:35.731187 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-02-04 09:27:35.731213 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-02-04 09:27:35.731237 | orchestrator |
2025-02-04 09:27:35.731262 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2025-02-04 09:27:35.731286 | orchestrator | Tuesday 04 February 2025 09:27:30 +0000 (0:00:01.786) 0:00:10.263 ******
2025-02-04 09:27:35.731310 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-02-04 09:27:35.731335 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-02-04 09:27:35.731361 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-02-04 09:27:35.731386 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-02-04 09:27:35.731408 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-02-04 09:27:35.731423 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-02-04 09:27:35.731437 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-02-04 09:27:35.731451 | orchestrator |
2025-02-04 09:27:35.731465 | orchestrator | PLAY RECAP *********************************************************************
2025-02-04 09:27:35.731488 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-02-04 09:27:35.731533 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-02-04 09:27:35.731549 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-02-04 09:27:35.731563 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-02-04 09:27:35.731578 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-02-04 09:27:35.731592 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-02-04 09:27:35.731606 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-02-04 09:27:35.731650 | orchestrator |
2025-02-04 09:27:35.731665 | orchestrator | TASKS RECAP ********************************************************************
2025-02-04 09:27:35.731679 | orchestrator | Tuesday 04 February 2025 09:27:32 +0000 (0:00:02.837) 0:00:13.101 ******
2025-02-04 09:27:35.731694 | orchestrator | ===============================================================================
2025-02-04 09:27:35.731732 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.97s
2025-02-04 09:27:35.731753 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.84s
2025-02-04 09:27:35.731767 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.32s
2025-02-04 09:27:35.731781 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.79s
2025-02-04 09:27:35.731795 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.60s
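The five dotfiles tasks above follow a clone / check / remove / link pattern. A minimal sketch of that pattern, assuming an illustrative repository URL and dotfiles list (the actual values used by this job are not visible in the log):

- name: Link dotfiles into the home folder (sketch)
  hosts: all
  vars:
    dotfiles_repo: https://github.com/example/dotfiles.git   # placeholder URL
    dotfiles_dest: "{{ ansible_env.HOME }}/.dotfiles"
    dotfiles_files:
      - .tmux.conf
  tasks:
    - name: Ensure dotfiles repository is cloned locally
      ansible.builtin.git:
        repo: "{{ dotfiles_repo }}"
        dest: "{{ dotfiles_dest }}"

    - name: Check whether each configured dotfile is already a link
      ansible.builtin.stat:
        path: "{{ ansible_env.HOME }}/{{ item }}"
      loop: "{{ dotfiles_files }}"
      register: dotfile_stat

    - name: Remove an existing regular file if a replacement is being linked
      ansible.builtin.file:
        path: "{{ ansible_env.HOME }}/{{ item.item }}"
        state: absent
      loop: "{{ dotfile_stat.results }}"
      when: item.stat.exists and not item.stat.islnk

    - name: Link dotfiles into home folder
      ansible.builtin.file:
        src: "{{ dotfiles_dest }}/{{ item }}"
        dest: "{{ ansible_env.HOME }}/{{ item }}"
        state: link
      loop: "{{ dotfiles_files }}"

The per-host "ok"/"changed" lines above match this flow: the ls check fails harmlessly when no file exists, so only the clone and the final link report a change.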
2025-02-04 09:27:35.731814 | orchestrator | 2025-02-04 09:27:35 | INFO  | Task 0e68b04b-faba-4a85-a778-cf056b5233cd is in state SUCCESS
2025-02-04 09:27:35 .. 09:27:54 | orchestrator | INFO  | Polling continues every ~3 s; tasks cd29dea3-7f34-4656-ac40-3dc8e1d4db53, bfa01e58-f10b-42d5-9ade-cdd4d8d16862, a7df5328-a962-44e2-be15-661fa5b61ce1, 66b5d27e-87ad-4655-9bb1-4181507cb757, 1f1c5dc7-31ef-4c35-9369-2cd8a4d8e9ea and 0a0adcb5-1a8c-4262-844e-6329a18170f9 remain in state STARTED
2025-02-04 09:27:57.272785 | orchestrator | 2025-02-04 09:27:57 | INFO  | Task a7df5328-a962-44e2-be15-661fa5b61ce1 is in state SUCCESS
2025-02-04 09:28:00.322422 | orchestrator | 2025-02-04 09:28:00 | INFO  | Task 0987ddec-e5d4-49f8-8a32-0f1e0038e8a5 is in state STARTED
2025-02-04 09:28:00 .. 09:28:15 | orchestrator | INFO  | Polling continues; tasks cd29dea3-7f34-4656-ac40-3dc8e1d4db53, bfa01e58-f10b-42d5-9ade-cdd4d8d16862, 66b5d27e-87ad-4655-9bb1-4181507cb757, 1f1c5dc7-31ef-4c35-9369-2cd8a4d8e9ea, 0a0adcb5-1a8c-4262-844e-6329a18170f9 and 0987ddec-e5d4-49f8-8a32-0f1e0038e8a5 remain in state STARTED
2025-02-04 09:28:18.753387 | orchestrator | 2025-02-04 09:28:18 | INFO  | Task 0a0adcb5-1a8c-4262-844e-6329a18170f9 is in state SUCCESS
2025-02-04 09:28:21 .. 09:28:24 | orchestrator | INFO  | Polling continues; the five remaining tasks stay in state STARTED
2025-02-04 09:28:27.956151 | orchestrator | 2025-02-04 09:28:27 | INFO  | Task 66b5d27e-87ad-4655-9bb1-4181507cb757 is in state SUCCESS
2025-02-04 09:28:27.957822 | orchestrator |
2025-02-04 09:28:27.957838 | orchestrator | PLAY [Apply role homer] ********************************************************
2025-02-04 09:28:27.957861 | orchestrator |
2025-02-04 09:28:27.957876 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-02-04 09:28:27.957890 | orchestrator | Tuesday 04 February 2025 09:27:19 +0000 (0:00:00.883) 0:00:00.883 ******
2025-02-04 09:28:27.957905 | orchestrator | ok: [testbed-manager] => {
2025-02-04 09:28:27.957922 | orchestrator |     "msg": "Support for homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-02-04 09:28:27.957939 | orchestrator | }
2025-02-04 09:28:27.957953 | orchestrator |
2025-02-04 09:28:27.957968 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-02-04 09:28:27.957982 | orchestrator | Tuesday 04 February 2025 09:27:19 +0000 (0:00:00.490) 0:00:01.374 ******
2025-02-04 09:28:27.957997 | orchestrator | ok: [testbed-manager]
2025-02-04 09:28:27.958012 | orchestrator |
2025-02-04 09:28:27.958101 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-02-04 09:28:27.958116 | orchestrator | Tuesday 04 February 2025 09:27:20 +0000 (0:00:01.221) 0:00:02.595 ******
2025-02-04 09:28:27.958130 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-02-04 09:28:27.958145 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-02-04 09:28:27.958159 | orchestrator |
2025-02-04 09:28:27.958173 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-02-04 09:28:27.958187 | orchestrator | Tuesday 04 February 2025 09:27:22 +0000 (0:00:02.094) 0:00:03.974 ******
2025-02-04 09:28:27.958202 | orchestrator | changed: [testbed-manager]
2025-02-04 09:28:27.958216 | orchestrator |
2025-02-04 09:28:27.958230 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-02-04 09:28:27.958245 | orchestrator | Tuesday 04 February 2025 09:27:24 +0000 (0:00:02.215) 0:00:06.068 ******
2025-02-04 09:28:27.958259 | orchestrator | changed: [testbed-manager]
2025-02-04 09:28:27.958273 | orchestrator |
2025-02-04 09:28:27.958288 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-02-04 09:28:27.958305 | orchestrator | Tuesday 04 February 2025 09:27:26 +0000 (0:00:02.215) 0:00:08.283 ******
2025-02-04 09:28:27.958322 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2025-02-04 09:28:27.958338 | orchestrator | ok: [testbed-manager]
2025-02-04 09:28:27.958354 | orchestrator |
2025-02-04 09:28:27.958396 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-02-04 09:28:27.958413 | orchestrator | Tuesday 04 February 2025 09:27:52 +0000 (0:00:25.906) 0:00:34.189 ******
2025-02-04 09:28:27.958430 | orchestrator | changed: [testbed-manager]
2025-02-04 09:28:27.958446 | orchestrator |
2025-02-04 09:28:27.958462 | orchestrator | PLAY RECAP *********************************************************************
2025-02-04 09:28:27.958478 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-02-04 09:28:27.958496 | orchestrator |
2025-02-04 09:28:27.958527 | orchestrator | TASKS RECAP ********************************************************************
2025-02-04 09:28:27.958543 | orchestrator | Tuesday 04 February 2025 09:27:56 +0000 (0:00:04.061) 0:00:38.251 ******
2025-02-04 09:28:27.958559 | orchestrator | ===============================================================================
2025-02-04 09:28:27.958576 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.91s
2025-02-04 09:28:27.958591 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 4.06s
2025-02-04 09:28:27.958607 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.22s
2025-02-04 09:28:27.958624 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.09s
2025-02-04 09:28:27.958641 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.38s
2025-02-04 09:28:27.958658 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.22s
2025-02-04 09:28:27.958674 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.49s
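The homer play is the compose-based service pattern used throughout this deployment: create directories, render configuration and docker-compose.yml under /opt/homer, bring the service up, and restart it via handler when files change. The "FAILED - RETRYING ... (10 retries left)" line is Ansible's retries/until loop. A hedged sketch, assuming the community.docker.docker_compose_v2 module and a network name of traefik (neither is confirmed by the log):

- name: Manage the homer service (sketch)
  hosts: testbed-manager
  tasks:
    - name: Create traefik external network
      community.docker.docker_network:
        name: traefik                       # network name is an assumption

    - name: Create required directories
      ansible.builtin.file:
        path: "{{ item }}"
        state: directory
      loop:
        - /opt/homer
        - /opt/homer/configuration

    - name: Copy docker-compose.yml file
      ansible.builtin.template:
        src: docker-compose.yml.j2          # illustrative template name
        dest: /opt/homer/docker-compose.yml
      notify: Restart homer service

    - name: Manage homer service
      community.docker.docker_compose_v2:   # module choice is an assumption
        project_src: /opt/homer
        state: present
      register: homer_result
      retries: 10                           # matches the "(10 retries left)" line
      delay: 5
      until: homer_result is not failed

  handlers:
    - name: Restart homer service
      community.docker.docker_compose_v2:
        project_src: /opt/homer
        state: restarted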
2025-02-04 09:28:27.958717 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-02-04 09:28:27.958749 | orchestrator |
2025-02-04 09:28:27.958764 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-02-04 09:28:27.958807 | orchestrator | Tuesday 04 February 2025 09:27:19 +0000 (0:00:00.288) 0:00:00.289 ******
2025-02-04 09:28:27.958821 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-02-04 09:28:27.958836 | orchestrator |
2025-02-04 09:28:27.958851 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-02-04 09:28:27.958865 | orchestrator | Tuesday 04 February 2025 09:27:20 +0000 (0:00:00.295) 0:00:00.584 ******
2025-02-04 09:28:27.958879 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-02-04 09:28:27.958894 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-02-04 09:28:27.958908 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-02-04 09:28:27.958922 | orchestrator |
2025-02-04 09:28:27.958936 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-02-04 09:28:27.958951 | orchestrator | Tuesday 04 February 2025 09:27:21 +0000 (0:00:01.779) 0:00:02.364 ******
2025-02-04 09:28:27.958965 | orchestrator | changed: [testbed-manager]
2025-02-04 09:28:27.958979 | orchestrator |
2025-02-04 09:28:27.958993 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-02-04 09:28:27.959008 | orchestrator | Tuesday 04 February 2025 09:27:23 +0000 (0:00:01.613) 0:00:03.977 ******
2025-02-04 09:28:27.959034 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-02-04 09:28:27.959049 | orchestrator | ok: [testbed-manager]
2025-02-04 09:28:27.959088 | orchestrator |
2025-02-04 09:28:27.959104 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-02-04 09:28:27.959119 | orchestrator | Tuesday 04 February 2025 09:28:04 +0000 (0:00:40.588) 0:00:44.566 ******
2025-02-04 09:28:27.959133 | orchestrator | changed: [testbed-manager]
2025-02-04 09:28:27.959148 | orchestrator |
2025-02-04 09:28:27.959162 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-02-04 09:28:27.959184 | orchestrator | Tuesday 04 February 2025 09:28:06 +0000 (0:00:01.873) 0:00:46.439 ******
2025-02-04 09:28:27.959199 | orchestrator | ok: [testbed-manager]
2025-02-04 09:28:27.959213 | orchestrator |
2025-02-04 09:28:27.959228 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-02-04 09:28:27.959242 | orchestrator | Tuesday 04 February 2025 09:28:07 +0000 (0:00:01.025) 0:00:47.465 ******
2025-02-04 09:28:27.959256 | orchestrator | changed: [testbed-manager]
2025-02-04 09:28:27.959271 | orchestrator |
2025-02-04 09:28:27.959285 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-02-04 09:28:27.959299 | orchestrator | Tuesday 04 February 2025 09:28:11 +0000 (0:00:04.686) 0:00:52.152 ******
2025-02-04 09:28:27.959314 | orchestrator | changed: [testbed-manager]
2025-02-04 09:28:27.959328 | orchestrator |
2025-02-04 09:28:27.959342 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for a healthy service] ***
2025-02-04 09:28:27.959362 | orchestrator | Tuesday 04 February 2025 09:28:13 +0000 (0:00:01.353) 0:00:53.505 ******
2025-02-04 09:28:27.959377 | orchestrator | changed: [testbed-manager]
2025-02-04 09:28:27.959391 | orchestrator |
2025-02-04 09:28:27.959406 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-02-04 09:28:27.959420 | orchestrator | Tuesday 04 February 2025 09:28:14 +0000 (0:00:01.049) 0:00:54.555 ******
2025-02-04 09:28:27.959434 | orchestrator | ok: [testbed-manager]
2025-02-04 09:28:27.959449 | orchestrator |
2025-02-04 09:28:27.959463 | orchestrator | PLAY RECAP *********************************************************************
2025-02-04 09:28:27.959478 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-02-04 09:28:27.959492 | orchestrator |
2025-02-04 09:28:27.959520 | orchestrator | TASKS RECAP ********************************************************************
2025-02-04 09:28:27.959534 | orchestrator | Tuesday 04 February 2025 09:28:14 +0000 (0:00:00.376) 0:00:54.931 ******
2025-02-04 09:28:27.959549 | orchestrator | ===============================================================================
2025-02-04 09:28:27.959563 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 40.59s
2025-02-04 09:28:27.959577 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 4.69s
2025-02-04 09:28:27.959591 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.87s
2025-02-04 09:28:27.959604 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.78s
2025-02-04 09:28:27.959619 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.61s
2025-02-04 09:28:27.959633 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.35s
2025-02-04 09:28:27.959647 | orchestrator | osism.services.openstackclient : Wait for a healthy service ------------- 1.05s
2025-02-04 09:28:27.959661 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.03s
2025-02-04 09:28:27.959675 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.38s
2025-02-04 09:28:27.959689 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.30s
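The "Copy openstack wrapper script" task suggests a thin shim so that running openstack on the manager executes the CLI inside the openstackclient container. A minimal sketch; the shim path and container name are assumptions for illustration, not values taken from the log:

- name: Provide an openstack wrapper script (sketch)
  hosts: testbed-manager
  tasks:
    - name: Copy openstack wrapper script
      ansible.builtin.copy:
        dest: /usr/local/bin/openstack      # path is an assumption
        mode: "0755"
        content: |
          #!/usr/bin/env bash
          # Run the OpenStack CLI inside the long-running openstackclient
          # container (container name is illustrative).
          exec docker exec -i openstackclient openstack "$@"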
2025-02-04 09:28:27.959753 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-02-04 09:28:27.959768 | orchestrator |
2025-02-04 09:28:27.959782 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-02-04 09:28:27.959797 | orchestrator | Tuesday 04 February 2025 09:27:20 +0000 (0:00:00.399) 0:00:00.399 ******
2025-02-04 09:28:27.959811 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-02-04 09:28:27.959825 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-02-04 09:28:27.959839 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-02-04 09:28:27.959854 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-02-04 09:28:27.959874 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-02-04 09:28:27.959888 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-02-04 09:28:27.959902 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
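"Group hosts based on enabled services" is Ansible's group_by pattern: each host joins a dynamic group derived from a boolean flag, so the following play can target only hosts where the service is enabled. A minimal sketch, with the variable name taken from the item shown above:

- name: Group hosts based on configuration (sketch)
  hosts: all
  gather_facts: false
  tasks:
    - name: Group hosts based on enabled services
      ansible.builtin.group_by:
        key: "enable_netdata_{{ enable_netdata | default(true) }}"

- name: Apply role netdata
  hosts: enable_netdata_True    # the dynamic group created above
  roles:
    - osism.services.netdata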
2025-02-04 09:28:27.959917 | orchestrator |
2025-02-04 09:28:27.959931 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-02-04 09:28:27.959945 | orchestrator |
2025-02-04 09:28:27.959959 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-02-04 09:28:27.959973 | orchestrator | Tuesday 04 February 2025 09:27:22 +0000 (0:00:02.354) 0:00:02.753 ******
2025-02-04 09:28:27.960002 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-02-04 09:28:27.960019 | orchestrator |
2025-02-04 09:28:27.960033 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-02-04 09:28:27.960047 | orchestrator | Tuesday 04 February 2025 09:27:24 +0000 (0:00:02.280) 0:00:05.033 ******
2025-02-04 09:28:27.960061 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:28:27.960075 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:28:27.960090 | orchestrator | ok: [testbed-manager]
2025-02-04 09:28:27.960104 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:28:27.960118 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:28:27.960138 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:28:27.960153 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:28:27.960167 | orchestrator |
2025-02-04 09:28:27.960182 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-02-04 09:28:27.960196 | orchestrator | Tuesday 04 February 2025 09:27:27 +0000 (0:00:02.486) 0:00:07.520 ******
2025-02-04 09:28:27.960210 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:28:27.960224 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:28:27.960239 | orchestrator | ok: [testbed-manager]
2025-02-04 09:28:27.960253 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:28:27.960267 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:28:27.960281 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:28:27.960295 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:28:27.960318 | orchestrator |
2025-02-04 09:28:27.960332 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-02-04 09:28:27.960347 | orchestrator | Tuesday 04 February 2025 09:27:30 +0000 (0:00:03.423) 0:00:10.943 ******
2025-02-04 09:28:27.960361 | orchestrator | changed: [testbed-manager]
2025-02-04 09:28:27.960376 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:28:27.960390 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:28:27.960404 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:28:27.960418 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:28:27.960432 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:28:27.960446 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:28:27.960461 | orchestrator |
2025-02-04 09:28:27.960475 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-02-04 09:28:27.960489 | orchestrator | Tuesday 04 February 2025 09:27:32 +0000 (0:00:02.338) 0:00:13.282 ******
2025-02-04 09:28:27.960504 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:28:27.960518 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:28:27.960532 | orchestrator | changed: [testbed-manager]
2025-02-04 09:28:27.960546 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:28:27.960560 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:28:27.960574 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:28:27.960588 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:28:27.960602 | orchestrator |
2025-02-04 09:28:27.960621 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-02-04 09:28:27.960635 | orchestrator | Tuesday 04 February 2025 09:27:40 +0000 (0:00:07.618) 0:00:20.901 ******
2025-02-04 09:28:27.960650 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:28:27.960669 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:28:27.960684 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:28:27.960698 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:28:27.960712 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:28:27.960726 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:28:27.960808 | orchestrator | changed: [testbed-manager]
2025-02-04 09:28:27.960823 | orchestrator |
2025-02-04 09:28:27.960837 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-02-04 09:28:27.960851 | orchestrator | Tuesday 04 February 2025 09:27:57 +0000 (0:00:17.126) 0:00:38.028 ******
2025-02-04 09:28:27.960866 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-02-04 09:28:27.960885 | orchestrator |
2025-02-04 09:28:27.960899 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-02-04 09:28:27.960913 | orchestrator | Tuesday 04 February 2025 09:27:59 +0000 (0:00:02.181) 0:00:40.209 ******
2025-02-04 09:28:27.960928 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-02-04 09:28:27.960942 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-02-04 09:28:27.960957 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-02-04 09:28:27.960971 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-02-04 09:28:27.960985 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-02-04 09:28:27.960999 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-02-04 09:28:27.961013 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-02-04 09:28:27.961027 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-02-04 09:28:27.961042 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-02-04 09:28:27.961056 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-02-04 09:28:27.961070 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-02-04 09:28:27.961084 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-02-04 09:28:27.961098 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-02-04 09:28:27.961112 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-02-04 09:28:27.961124 | orchestrator |
2025-02-04 09:28:27.961137 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-02-04 09:28:27.961150 | orchestrator | Tuesday 04 February 2025 09:28:07 +0000 (0:00:08.016) 0:00:48.226 ******
2025-02-04 09:28:27.961163 | orchestrator | ok: [testbed-manager]
2025-02-04 09:28:27.961176 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:28:27.961189 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:28:27.961201 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:28:27.961214 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:28:27.961227 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:28:27.961240 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:28:27.961252 | orchestrator |
2025-02-04 09:28:27.961265 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-02-04 09:28:27.961277 | orchestrator | Tuesday 04 February 2025 09:28:09 +0000 (0:00:01.675) 0:00:49.901 ******
2025-02-04 09:28:27.961290 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:28:27.961303 | orchestrator | changed: [testbed-manager]
2025-02-04 09:28:27.961315 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:28:27.961328 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:28:27.961340 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:28:27.961353 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:28:27.961365 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:28:27.961378 | orchestrator |
2025-02-04 09:28:27.961391 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-02-04 09:28:27.961410 | orchestrator | Tuesday 04 February 2025 09:28:12 +0000 (0:00:03.010) 0:00:52.912 ******
2025-02-04 09:28:27.961423 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:28:27.961444 | orchestrator | ok: [testbed-manager]
2025-02-04 09:28:27.961456 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:28:27.961469 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:28:27.961481 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:28:27.961494 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:28:27.961507 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:28:27.961519 | orchestrator |
2025-02-04 09:28:27.961532 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-02-04 09:28:27.961545 | orchestrator | Tuesday 04 February 2025 09:28:15 +0000 (0:00:03.059) 0:00:55.971 ******
2025-02-04 09:28:27.961557 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:28:27.961570 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:28:27.961583 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:28:27.961595 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:28:27.961608 | orchestrator | ok: [testbed-manager]
2025-02-04 09:28:27.961620 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:28:27.961633 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:28:27.961646 | orchestrator |
2025-02-04 09:28:27.961658 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-02-04 09:28:27.961671 | orchestrator | Tuesday 04 February 2025 09:28:18 +0000 (0:00:02.738) 0:00:58.710 ******
2025-02-04 09:28:27.961684 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-02-04 09:28:27.961698 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-02-04 09:28:27.961711 | orchestrator |
2025-02-04 09:28:27.961724 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-02-04 09:28:27.961760 | orchestrator | Tuesday 04 February 2025 09:28:20 +0000 (0:00:01.806) 0:01:00.517 ******
2025-02-04 09:28:27.961773 | orchestrator | changed: [testbed-manager]
2025-02-04 09:28:27.961786 | orchestrator |
2025-02-04 09:28:27.961799 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-02-04 09:28:27.961811 | orchestrator | Tuesday 04 February 2025 09:28:22 +0000 (0:00:02.332) 0:01:02.849 ******
2025-02-04 09:28:27.961823 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:28:27.961836 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:28:27.961856 | orchestrator | changed: [testbed-manager]
2025-02-04 09:28:27.961870 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:28:27.961884 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:28:27.961896 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:28:27.961908 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:28:27.961921 | orchestrator |
2025-02-04 09:28:27.961934 | orchestrator | PLAY RECAP *********************************************************************
2025-02-04 09:28:27.961946 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-02-04 09:28:27.961960 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-02-04 09:28:27.961972 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-02-04 09:28:27.961990 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-02-04 09:28:27.962003 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-02-04 09:28:27.962051 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-02-04 09:28:27.962066 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-02-04 09:28:27.962085 | orchestrator |
2025-02-04 09:28:27.962111 | orchestrator | TASKS RECAP ********************************************************************
2025-02-04 09:28:27.962124 | orchestrator | Tuesday 04 February 2025 09:28:25 +0000 (0:00:02.921) 0:01:05.771 ******
2025-02-04 09:28:27.962136 | orchestrator | ===============================================================================
2025-02-04 09:28:27.962149 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 17.13s
2025-02-04 09:28:27.962162 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 8.02s
2025-02-04 09:28:27.962174 | orchestrator | osism.services.netdata : Add repository --------------------------------- 7.62s
2025-02-04 09:28:27.962186 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.42s
2025-02-04 09:28:27.962199 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 3.06s
2025-02-04 09:28:27.962211 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 3.01s
2025-02-04 09:28:27.962224 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 2.92s
2025-02-04 09:28:27.962236 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.74s
2025-02-04 09:28:27.962249 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.49s
2025-02-04 09:28:27.962266 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.35s
2025-02-04 09:28:27.962279 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.34s
2025-02-04 09:28:27.962298 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.33s
2025-02-04 09:28:27.963169 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.28s
2025-02-04 09:28:27.963274 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 2.18s
2025-02-04 09:28:27.963295 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.81s
2025-02-04 09:28:27.963311 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.68s
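The netdata play is a conventional Debian-family repository install plus configuration: add the repository key and source, install the package, template netdata.conf and stream.conf, opt out of anonymous statistics, add the netdata user to the docker group, and restart via handler. A condensed sketch; the repository and key URLs and the sysctl value are assumptions, not values taken from this job:

- name: Install and configure netdata, Debian family (sketch)
  hosts: all
  become: true
  tasks:
    - name: Install apt-transport-https package
      ansible.builtin.apt:
        name: apt-transport-https
        state: present

    - name: Add repository gpg key
      ansible.builtin.get_url:
        url: https://repo.netdata.cloud/netdatabot.gpg.key   # assumed URL
        dest: /etc/apt/trusted.gpg.d/netdata.asc

    - name: Add repository
      ansible.builtin.apt_repository:
        repo: "deb https://repo.netdata.cloud/repos/stable/{{ ansible_distribution | lower }} {{ ansible_distribution_release }}/"   # assumed layout
        state: present

    - name: Install package netdata
      ansible.builtin.apt:
        name: netdata
        state: present
        update_cache: true

    - name: Copy configuration files
      ansible.builtin.template:
        src: "{{ item }}.j2"                 # illustrative template names
        dest: "/etc/netdata/{{ item }}"
      loop:
        - netdata.conf
        - stream.conf
      notify: Restart service netdata

    - name: Opt out from anonymous statistics
      ansible.builtin.file:
        path: /etc/netdata/.opt-out-from-anonymous-statistics
        state: touch
        modification_time: preserve
        access_time: preserve

    - name: Add netdata user to docker group
      ansible.builtin.user:
        name: netdata
        groups: docker
        append: true

    - name: Set sysctl vm.max_map_count parameter (server host only, per the log)
      ansible.posix.sysctl:
        name: vm.max_map_count
        value: "262144"                      # illustrative value, not from the log
      when: inventory_hostname == "testbed-manager"

  handlers:
    - name: Restart service netdata
      ansible.builtin.service:
        name: netdata
        state: restarted

The server.yml/client.yml split visible above matches the stream.conf pairing: clients stream their metrics to the manager, which is why only the manager gets the vm.max_map_count tuning.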
2025-02-04 09:28:27.963328 | orchestrator | 2025-02-04 09:28:27 | INFO  | Task 1f1c5dc7-31ef-4c35-9369-2cd8a4d8e9ea is in state STARTED
2025-02-04 09:28:27.963357 | orchestrator | 2025-02-04 09:28:27 | INFO  | Task 0987ddec-e5d4-49f8-8a32-0f1e0038e8a5 is in state STARTED
2025-02-04 09:28:27.963940 | orchestrator | 2025-02-04 09:28:27 | INFO  | Wait 1 second(s) until the next check
2025-02-04 09:28:31.026886 | orchestrator | 2025-02-04 09:28:31 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED
2025-02-04 09:28:31.028645 | orchestrator | 2025-02-04 09:28:31 | INFO  | Task bfa01e58-f10b-42d5-9ade-cdd4d8d16862 is in state STARTED
2025-02-04 09:28:31.030170 | orchestrator | 2025-02-04 09:28:31 | INFO  | Task 1f1c5dc7-31ef-4c35-9369-2cd8a4d8e9ea is in state STARTED
2025-02-04 09:28:31.033467 | orchestrator | 2025-02-04 09:28:31 | INFO  | Task 0987ddec-e5d4-49f8-8a32-0f1e0038e8a5 is in state STARTED
[... the same state check repeats every few seconds until 09:29:28: task 1f1c5dc7-31ef-4c35-9369-2cd8a4d8e9ea reaches SUCCESS at 09:28:43, while tasks cd29dea3-7f34-4656-ac40-3dc8e1d4db53, bfa01e58-f10b-42d5-9ade-cdd4d8d16862 and 0987ddec-e5d4-49f8-8a32-0f1e0038e8a5 remain in state STARTED, each cycle ending with "Wait 1 second(s) until the next check" ...]
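
The block above is the deployment driver polling four manager task IDs and sleeping one second between checks until each task leaves the STARTED state. The same wait-until-done idiom, sketched as a hedged Ansible task (the task-state helper is a hypothetical stand-in; the real loop lives in the osism client, not in a playbook):

# Hypothetical stand-in: poll one task's state until it reports SUCCESS.
- name: Wait for manager task to reach SUCCESS
  ansible.builtin.command: /usr/local/bin/task-state cd29dea3-7f34-4656-ac40-3dc8e1d4db53  # hypothetical helper that prints the task state
  register: task_check
  until: task_check.stdout == "SUCCESS"
  retries: 120  # assumed upper bound
  delay: 1      # matches the "Wait 1 second(s)" cadence in the log
  changed_when: false
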
2025-02-04 09:29:32.052005 | orchestrator |
2025-02-04 09:29:32.052025 | orchestrator |
2025-02-04 09:29:32.052041 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-02-04 09:29:32.052063 | orchestrator |
2025-02-04 09:29:32.052085 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-02-04 09:29:32.052109 | orchestrator | Tuesday 04 February 2025 09:27:39 +0000 (0:00:00.257) 0:00:00.257 ******
2025-02-04 09:29:32.052130 | orchestrator | ok: [testbed-manager]
2025-02-04 09:29:32.052152 | orchestrator |
2025-02-04 09:29:32.052174 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-02-04 09:29:32.052197 | orchestrator | Tuesday 04 February 2025 09:27:40 +0000 (0:00:00.966) 0:00:01.224 ******
2025-02-04 09:29:32.052220 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-02-04 09:29:32.052242 | orchestrator |
2025-02-04 09:29:32.052263 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-02-04 09:29:32.052284 | orchestrator | Tuesday 04 February 2025 09:27:41 +0000 (0:00:00.906) 0:00:02.130 ******
2025-02-04 09:29:32.052306 | orchestrator | changed: [testbed-manager]
2025-02-04 09:29:32.052328 | orchestrator |
2025-02-04 09:29:32.052350 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-02-04 09:29:32.052372 | orchestrator | Tuesday 04 February 2025 09:27:43 +0000 (0:00:01.963) 0:00:04.094 ******
2025-02-04 09:29:32.052394 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
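
The FAILED - RETRYING line above is Ansible's standard retries/until output: the task keeps re-checking the phpmyadmin container until it comes up, and it succeeds roughly 53 seconds later, as the ok result and the 52.94 s task timing below show. A hedged sketch of that retry idiom, assuming a plain compose invocation (the real osism.services.phpmyadmin task is not shown in the log):

# Hypothetical sketch of a retried compose deployment; the command, retries
# and delay are assumptions, not the actual osism.services.phpmyadmin task.
- name: Manage phpmyadmin service
  ansible.builtin.command:
    cmd: docker compose up -d --wait
    chdir: /opt/phpmyadmin  # directory created and populated by the two tasks above
  register: compose_result
  until: compose_result is success
  retries: 10  # matches the "(10 retries left)" countdown in the log
  delay: 6     # assumed
  changed_when: false
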
2025-02-04 09:29:32.052408 | orchestrator | ok: [testbed-manager] 2025-02-04 09:29:32.052445 | orchestrator | 2025-02-04 09:29:32.052459 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-02-04 09:29:32.052474 | orchestrator | Tuesday 04 February 2025 09:28:36 +0000 (0:00:52.941) 0:00:57.035 ****** 2025-02-04 09:29:32.052489 | orchestrator | changed: [testbed-manager] 2025-02-04 09:29:32.052504 | orchestrator | 2025-02-04 09:29:32.052518 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:29:32.052533 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-04 09:29:32.052549 | orchestrator | 2025-02-04 09:29:32.052564 | orchestrator | 2025-02-04 09:29:32.052579 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:29:32.052594 | orchestrator | Tuesday 04 February 2025 09:28:40 +0000 (0:00:03.659) 0:01:00.694 ****** 2025-02-04 09:29:32.052609 | orchestrator | =============================================================================== 2025-02-04 09:29:32.052623 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 52.94s 2025-02-04 09:29:32.052637 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.66s 2025-02-04 09:29:32.052652 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.96s 2025-02-04 09:29:32.052666 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.97s 2025-02-04 09:29:32.052680 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.91s 2025-02-04 09:29:32.052695 | orchestrator | 2025-02-04 09:29:32.052709 | orchestrator | 2025-02-04 09:29:32.052722 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-02-04 09:29:32.052734 | orchestrator | 2025-02-04 09:29:32.052747 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-02-04 09:29:32.052783 | orchestrator | Tuesday 04 February 2025 09:27:13 +0000 (0:00:00.384) 0:00:00.384 ****** 2025-02-04 09:29:32.052797 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:29:32.052812 | orchestrator | 2025-02-04 09:29:32.052824 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-02-04 09:29:32.052837 | orchestrator | Tuesday 04 February 2025 09:27:15 +0000 (0:00:01.811) 0:00:02.195 ****** 2025-02-04 09:29:32.052857 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-02-04 09:29:32.052870 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-02-04 09:29:32.052883 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-02-04 09:29:32.052896 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-02-04 09:29:32.052908 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-02-04 09:29:32.052921 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-02-04 09:29:32.052933 | orchestrator | changed: 
[testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-02-04 09:29:32.052946 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-02-04 09:29:32.052958 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-02-04 09:29:32.052971 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-02-04 09:29:32.052983 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-02-04 09:29:32.052996 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-02-04 09:29:32.053010 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-02-04 09:29:32.053023 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-02-04 09:29:32.053042 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-02-04 09:29:32.053055 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-02-04 09:29:32.053079 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-02-04 09:29:32.053097 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-02-04 09:29:32.053110 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-02-04 09:29:32.053123 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-02-04 09:29:32.053136 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-02-04 09:29:32.053148 | orchestrator | 2025-02-04 09:29:32.053161 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-02-04 09:29:32.053174 | orchestrator | Tuesday 04 February 2025 09:27:20 +0000 (0:00:05.359) 0:00:07.555 ****** 2025-02-04 09:29:32.053186 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:29:32.053205 | orchestrator | 2025-02-04 09:29:32.053218 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-02-04 09:29:32.053230 | orchestrator | Tuesday 04 February 2025 09:27:22 +0000 (0:00:01.644) 0:00:09.200 ****** 2025-02-04 09:29:32.053246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-04 09:29:32.053263 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-04 09:29:32.053276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-04 09:29:32.053289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-04 09:29:32.053308 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-04 09:29:32.053347 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-04 09:29:32.053372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.053393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.053416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.053438 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.053460 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-04 09:29:32.053480 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.053526 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.053550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': 
True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.053590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.053611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.053633 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.053653 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.053672 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.053705 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.053726 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.053748 | orchestrator | 2025-02-04 09:29:32.053856 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-02-04 09:29:32.053880 | orchestrator | Tuesday 04 February 2025 09:27:27 +0000 (0:00:04.994) 0:00:14.195 ****** 2025-02-04 09:29:32.053915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-04 09:29:32.053942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-04 09:29:32.053967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-04 09:29:32.053990 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:29:32.054075 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-04 09:29:32.054105 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-04 09:29:32.054139 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-04 09:29:32.054161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-04 09:29:32.054194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-04 09:29:32.054216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-04 09:29:32.054240 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:29:32.054262 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:29:32.054284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-04 09:29:32.054306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-04 09:29:32.054327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-04 09:29:32.054345 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:29:32.054362 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-04 09:29:32.054388 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-04 09:29:32.054406 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-04 09:29:32.054424 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:29:32.054455 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-04 09:29:32.054476 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-04 09:29:32.054494 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-04 09:29:32.054513 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:29:32.054529 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-04 09:29:32.054546 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-04 09:29:32.054573 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-04 09:29:32.054589 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:29:32.054606 | orchestrator | 2025-02-04 09:29:32.054617 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-02-04 09:29:32.054628 | orchestrator | Tuesday 04 February 2025 09:27:29 +0000 (0:00:02.269) 0:00:16.464 ****** 2025-02-04 09:29:32.054638 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-04 09:29:32.054663 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-04 09:29:32.054678 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-04 09:29:32.054689 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:29:32.054700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-04 09:29:32.054711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-04 09:29:32.054727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-04 09:29:32.054738 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:29:32.054748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-04 09:29:32.054784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-04 09:29:32.054795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-02-04 09:29:32.054805 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:29:32.054822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-02-04 09:29:32.054833 | orchestrator | 2025-02-04 09:29:32 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED
2025-02-04 09:29:32.054844 | orchestrator | 2025-02-04 09:29:32 | INFO  | Task bfa01e58-f10b-42d5-9ade-cdd4d8d16862 is in state SUCCESS
2025-02-04 09:29:32.054856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-02-04 09:29:32.054867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-02-04 09:29:32.054883 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-02-04 09:29:32.054894 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes':
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-04 09:29:32.054905 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-04 09:29:32.054915 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:29:32.054926 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-04 09:29:32.054936 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:29:32.054957 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-04 09:29:32.054969 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-04 09:29:32.054979 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:29:32.054992 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-02-04 09:29:32.055017 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-04 09:29:32.055035 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-04 09:29:32.055053 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:29:32.055066 | orchestrator | 2025-02-04 09:29:32.055079 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-02-04 09:29:32.055096 | orchestrator | Tuesday 04 February 2025 09:27:31 +0000 (0:00:02.313) 0:00:18.777 ****** 2025-02-04 09:29:32.055113 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:29:32.055129 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:29:32.055147 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:29:32.055164 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:29:32.055181 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:29:32.055197 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:29:32.055215 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:29:32.055231 | orchestrator | 2025-02-04 09:29:32.055253 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-02-04 09:29:32.055273 | orchestrator | Tuesday 04 February 2025 09:27:32 +0000 (0:00:01.053) 0:00:19.830 ****** 2025-02-04 09:29:32.055290 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:29:32.055308 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:29:32.055326 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:29:32.055346 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:29:32.055365 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:29:32.055383 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:29:32.055399 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:29:32.055417 | orchestrator | 2025-02-04 09:29:32.055434 | orchestrator | TASK [common : Ensure fluentd image is present for label check] **************** 2025-02-04 09:29:32.055451 | orchestrator | Tuesday 04 February 2025 09:27:34 +0000 (0:00:01.174) 0:00:21.005 ****** 2025-02-04 09:29:32.055469 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:29:32.055487 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:29:32.055500 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:29:32.055510 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:29:32.055520 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:29:32.055530 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:29:32.055540 | orchestrator | changed: [testbed-manager] 2025-02-04 09:29:32.055551 | orchestrator | 2025-02-04 09:29:32.055561 | orchestrator | TASK [common : Fetch fluentd Docker image labels] ****************************** 2025-02-04 09:29:32.055571 | orchestrator | Tuesday 04 February 2025 09:28:07 +0000 (0:00:33.656) 0:00:54.661 ****** 2025-02-04 09:29:32.055581 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:29:32.055592 | orchestrator | ok: [testbed-manager] 
2025-02-04 09:29:32.055602 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:29:32.055621 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:29:32.055639 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:29:32.055649 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:29:32.055666 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:29:32.055676 | orchestrator | 2025-02-04 09:29:32.055687 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-02-04 09:29:32.055697 | orchestrator | Tuesday 04 February 2025 09:28:10 +0000 (0:00:02.771) 0:00:57.433 ****** 2025-02-04 09:29:32.055707 | orchestrator | ok: [testbed-manager] 2025-02-04 09:29:32.055717 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:29:32.055728 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:29:32.055738 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:29:32.055748 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:29:32.055788 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:29:32.055805 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:29:32.055822 | orchestrator | 2025-02-04 09:29:32.055839 | orchestrator | TASK [common : Fetch fluentd Podman image labels] ****************************** 2025-02-04 09:29:32.055850 | orchestrator | Tuesday 04 February 2025 09:28:11 +0000 (0:00:01.300) 0:00:58.734 ****** 2025-02-04 09:29:32.055860 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:29:32.055870 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:29:32.055881 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:29:32.055891 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:29:32.055901 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:29:32.055911 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:29:32.055922 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:29:32.055932 | orchestrator | 2025-02-04 09:29:32.055942 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-02-04 09:29:32.055953 | orchestrator | Tuesday 04 February 2025 09:28:13 +0000 (0:00:01.372) 0:01:00.107 ****** 2025-02-04 09:29:32.055963 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:29:32.055973 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:29:32.055983 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:29:32.055993 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:29:32.056003 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:29:32.056014 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:29:32.056027 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:29:32.056044 | orchestrator | 2025-02-04 09:29:32.056061 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-02-04 09:29:32.056078 | orchestrator | Tuesday 04 February 2025 09:28:14 +0000 (0:00:01.275) 0:01:01.382 ****** 2025-02-04 09:29:32.056095 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-04 09:29:32.056113 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-04 09:29:32.056130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-04 09:29:32.056156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-04 09:29:32.056190 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.056211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.056230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-02-04 09:29:32.056248 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-04 09:29:32.056266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.056285 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-04 09:29:32.056313 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.056342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.056370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.056389 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.056411 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-04 09:29:32.056429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.056446 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.056463 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.056489 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.056507 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
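Each item in the "Copying over config.json files for services" task above writes a config.json into /etc/kolla/<service>/ on the target host; because that directory is bind-mounted read-only at /var/lib/kolla/config_files/ and KOLLA_CONFIG_STRATEGY is COPY_ALWAYS, the container's kolla_start entrypoint copies the listed files into place on every start. A sketch of the shape such a file takes for the cron service follows; the command/config_files layout matches kolla's documented convention, but these exact values are illustrative and not read from this job's output.

    import json

    # Hypothetical contents of /etc/kolla/cron/config.json. The layout
    # (command plus config_files entries with source/dest/owner/perm)
    # follows kolla convention; the values are illustrative only.
    cron_config = {
        "command": "/usr/sbin/cron -f",
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/logrotate.conf",
                "dest": "/etc/logrotate.conf",
                "owner": "root",
                "perm": "0644",
            }
        ],
    }
    print(json.dumps(cron_config, indent=4))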
2025-02-04 09:29:32.056533 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.056550 | orchestrator | 2025-02-04 09:29:32.056568 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-02-04 09:29:32.056585 | orchestrator | Tuesday 04 February 2025 09:28:20 +0000 (0:00:05.537) 0:01:06.919 ****** 2025-02-04 09:29:32.056603 | orchestrator | [WARNING]: Skipped 2025-02-04 09:29:32.056621 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-02-04 09:29:32.056638 | orchestrator | to this access issue: 2025-02-04 09:29:32.056652 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-02-04 09:29:32.056662 | orchestrator | directory 2025-02-04 09:29:32.056673 | orchestrator | ok: [testbed-manager -> localhost] 2025-02-04 09:29:32.056683 | orchestrator | 2025-02-04 09:29:32.056693 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-02-04 09:29:32.056703 | orchestrator | Tuesday 04 February 2025 09:28:20 +0000 (0:00:00.620) 0:01:07.540 ****** 2025-02-04 09:29:32.056714 | orchestrator | [WARNING]: Skipped 2025-02-04 09:29:32.056724 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-02-04 09:29:32.056734 | orchestrator | to this access issue: 2025-02-04 09:29:32.056744 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-02-04 09:29:32.056789 | orchestrator | directory 2025-02-04 09:29:32.056801 | orchestrator | ok: [testbed-manager -> localhost] 2025-02-04 09:29:32.056811 | orchestrator | 2025-02-04 09:29:32.056822 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-02-04 09:29:32.056832 | orchestrator | Tuesday 04 February 2025 09:28:21 +0000 (0:00:00.518) 0:01:08.058 ****** 2025-02-04 09:29:32.056842 | orchestrator | [WARNING]: Skipped 2025-02-04 09:29:32.056852 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-02-04 09:29:32.056863 | orchestrator | to this access issue: 2025-02-04 09:29:32.056873 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-02-04 09:29:32.056883 | orchestrator | directory 2025-02-04 09:29:32.056900 | orchestrator | ok: [testbed-manager -> localhost] 2025-02-04 09:29:32.056911 | orchestrator | 2025-02-04 09:29:32.056921 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-02-04 09:29:32.056931 | orchestrator | Tuesday 04 February 2025 09:28:21 +0000 (0:00:00.644) 0:01:08.703 ****** 2025-02-04 09:29:32.056941 | orchestrator | [WARNING]: Skipped 2025-02-04 09:29:32.056952 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-02-04 09:29:32.056962 | orchestrator | to this access issue: 2025-02-04 09:29:32.056972 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-02-04 09:29:32.056982 | orchestrator | 
directory 2025-02-04 09:29:32.056993 | orchestrator | ok: [testbed-manager -> localhost] 2025-02-04 09:29:32.057003 | orchestrator | 2025-02-04 09:29:32.057013 | orchestrator | TASK [common : Copying over td-agent.conf] ************************************* 2025-02-04 09:29:32.057024 | orchestrator | Tuesday 04 February 2025 09:28:22 +0000 (0:00:00.498) 0:01:09.202 ****** 2025-02-04 09:29:32.057034 | orchestrator | changed: [testbed-manager] 2025-02-04 09:29:32.057044 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:29:32.057055 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:29:32.057065 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:29:32.057075 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:29:32.057085 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:29:32.057096 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:29:32.057106 | orchestrator | 2025-02-04 09:29:32.057116 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-02-04 09:29:32.057126 | orchestrator | Tuesday 04 February 2025 09:28:27 +0000 (0:00:05.237) 0:01:14.439 ****** 2025-02-04 09:29:32.057137 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-02-04 09:29:32.057148 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-02-04 09:29:32.057159 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-02-04 09:29:32.057169 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-02-04 09:29:32.057180 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-02-04 09:29:32.057190 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-02-04 09:29:32.057200 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-02-04 09:29:32.057210 | orchestrator | 2025-02-04 09:29:32.057220 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-02-04 09:29:32.057231 | orchestrator | Tuesday 04 February 2025 09:28:30 +0000 (0:00:03.196) 0:01:17.636 ****** 2025-02-04 09:29:32.057241 | orchestrator | changed: [testbed-manager] 2025-02-04 09:29:32.057251 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:29:32.057267 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:29:32.057284 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:29:32.057300 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:29:32.057317 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:29:32.057334 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:29:32.057355 | orchestrator | 2025-02-04 09:29:32.057373 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-02-04 09:29:32.057390 | orchestrator | Tuesday 04 February 2025 09:28:33 +0000 (0:00:02.756) 0:01:20.392 ****** 2025-02-04 09:29:32.057417 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-04 09:29:32.057445 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-04 09:29:32.057463 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-04 09:29:32.057490 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-04 09:29:32.057508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-04 09:29:32.057530 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.057548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': 
True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-04 09:29:32.057574 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-04 09:29:32.057600 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-04 09:29:32.057621 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.057639 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.057657 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-04 09:29:32.057679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-02-04 09:29:32.057699 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.057717 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-04 09:29:32.057743 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-04 09:29:32.057825 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.057846 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-04 09:29:32.057863 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-04 09:29:32.057881 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.057899 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.057917 | orchestrator | 2025-02-04 09:29:32.057934 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-02-04 09:29:32.057954 | orchestrator | Tuesday 04 February 2025 09:28:36 +0000 (0:00:02.636) 0:01:23.028 ****** 2025-02-04 09:29:32.057965 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-02-04 09:29:32.057975 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-02-04 09:29:32.057986 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-02-04 09:29:32.057996 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-02-04 09:29:32.058006 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-02-04 09:29:32.058056 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-02-04 09:29:32.058069 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-02-04 09:29:32.058087 | orchestrator | 2025-02-04 09:29:32.058098 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-02-04 09:29:32.058108 | orchestrator | Tuesday 04 February 2025 09:28:39 +0000 (0:00:03.241) 0:01:26.270 ****** 2025-02-04 09:29:32.058118 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-02-04 09:29:32.058129 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-02-04 09:29:32.058139 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-02-04 09:29:32.058157 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-02-04 09:29:32.058168 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-02-04 09:29:32.058178 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-02-04 09:29:32.058188 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-02-04 09:29:32.058199 | orchestrator | 2025-02-04 09:29:32.058209 | orchestrator | TASK [common : Check common containers] **************************************** 2025-02-04 09:29:32.058219 | orchestrator | Tuesday 04 February 2025 09:28:42 +0000 (0:00:02.956) 0:01:29.226 ****** 2025-02-04 09:29:32.058230 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-04 09:29:32.058252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-04 09:29:32.058267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-04 09:29:32.058283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-04 09:29:32.058298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.058323 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.058355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.058371 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-04 09:29:32.058426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.058437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.058448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.058458 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.058471 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-04 09:29:32.058486 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.058500 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/fluentd:2024.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-02-04 09:29:32.058509 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.058519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.058528 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.058537 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/kolla-toolbox:2024.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-02-04 09:29:32.058546 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.058560 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cron:2024.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-02-04 09:29:32.058569 | orchestrator | 2025-02-04 09:29:32.058578 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-02-04 09:29:32.058587 | orchestrator | Tuesday 04 February 2025 09:28:46 +0000 (0:00:04.547) 0:01:33.773 ****** 2025-02-04 09:29:32.058596 | orchestrator | changed: [testbed-manager] 2025-02-04 09:29:32.058605 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:29:32.058613 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:29:32.058622 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:29:32.058630 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:29:32.058639 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:29:32.058647 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:29:32.058660 | orchestrator | 2025-02-04 09:29:32.058669 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-02-04 09:29:32.058677 | orchestrator | Tuesday 04 February 2025 09:28:49 +0000 (0:00:02.345) 0:01:36.119 ****** 2025-02-04 09:29:32.058686 | orchestrator | changed: [testbed-manager] 2025-02-04 09:29:32.058695 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:29:32.058704 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:29:32.058716 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:29:32.058725 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:29:32.058734 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:29:32.058745 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:29:32.058779 | orchestrator | 2025-02-04 09:29:32.058795 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-02-04 09:29:32.058809 | orchestrator | Tuesday 04 February 2025 09:28:51 +0000 (0:00:02.010) 0:01:38.129 ****** 2025-02-04 09:29:32.058822 | orchestrator | 2025-02-04 09:29:32.058836 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-02-04 09:29:32.058850 | orchestrator | Tuesday 04 February 2025 09:28:51 +0000 (0:00:00.065) 0:01:38.194 ****** 2025-02-04 09:29:32.058863 | orchestrator | 2025-02-04 09:29:32.058876 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-02-04 09:29:32.058890 | orchestrator | Tuesday 04 February 2025 09:28:51 +0000 (0:00:00.074) 0:01:38.269 ****** 2025-02-04 09:29:32.058904 | orchestrator | 2025-02-04 09:29:32.058917 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-02-04 09:29:32.058932 
| orchestrator | Tuesday 04 February 2025 09:28:51 +0000 (0:00:00.328) 0:01:38.598 ****** 2025-02-04 09:29:32.058947 | orchestrator | 2025-02-04 09:29:32.058962 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-02-04 09:29:32.058975 | orchestrator | Tuesday 04 February 2025 09:28:51 +0000 (0:00:00.054) 0:01:38.652 ****** 2025-02-04 09:29:32.058988 | orchestrator | 2025-02-04 09:29:32.059002 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-02-04 09:29:32.059019 | orchestrator | Tuesday 04 February 2025 09:28:51 +0000 (0:00:00.054) 0:01:38.707 ****** 2025-02-04 09:29:32.059033 | orchestrator | 2025-02-04 09:29:32.059047 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-02-04 09:29:32.059063 | orchestrator | Tuesday 04 February 2025 09:28:51 +0000 (0:00:00.052) 0:01:38.759 ****** 2025-02-04 09:29:32.059076 | orchestrator | 2025-02-04 09:29:32.059091 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-02-04 09:29:32.059109 | orchestrator | Tuesday 04 February 2025 09:28:52 +0000 (0:00:00.281) 0:01:39.040 ****** 2025-02-04 09:29:32.059118 | orchestrator | changed: [testbed-manager] 2025-02-04 09:29:32.059127 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:29:32.059135 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:29:32.059144 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:29:32.059153 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:29:32.059161 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:29:32.059170 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:29:32.059179 | orchestrator | 2025-02-04 09:29:32.059192 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-02-04 09:29:32.059201 | orchestrator | Tuesday 04 February 2025 09:28:59 +0000 (0:00:07.777) 0:01:46.817 ****** 2025-02-04 09:29:32.059210 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:29:32.059218 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:29:32.059227 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:29:32.059236 | orchestrator | changed: [testbed-manager] 2025-02-04 09:29:32.059244 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:29:32.059253 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:29:32.059262 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:29:32.059270 | orchestrator | 2025-02-04 09:29:32.059279 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-02-04 09:29:32.059288 | orchestrator | Tuesday 04 February 2025 09:29:19 +0000 (0:00:19.243) 0:02:06.061 ****** 2025-02-04 09:29:32.059296 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:29:32.059305 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:29:32.059314 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:29:32.059322 | orchestrator | ok: [testbed-manager] 2025-02-04 09:29:32.059331 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:29:32.059340 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:29:32.059348 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:29:32.059357 | orchestrator | 2025-02-04 09:29:32.059366 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-02-04 09:29:32.059374 | orchestrator | Tuesday 04 February 2025 09:29:21 +0000 (0:00:02.245) 0:02:08.307 ****** 2025-02-04 
09:29:32.059383 | orchestrator | changed: [testbed-manager] 2025-02-04 09:29:32.059392 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:29:32.059400 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:29:32.059409 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:29:32.059418 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:29:32.059426 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:29:32.059435 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:29:32.059443 | orchestrator | 2025-02-04 09:29:32.059452 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:29:32.059462 | orchestrator | testbed-manager : ok=25  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-04 09:29:32.059472 | orchestrator | testbed-node-0 : ok=21  changed=14  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-04 09:29:32.059481 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-04 09:29:32.059489 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-04 09:29:32.059498 | orchestrator | testbed-node-3 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-04 09:29:32.059507 | orchestrator | testbed-node-4 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-04 09:29:32.059523 | orchestrator | testbed-node-5 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-02-04 09:29:32.063640 | orchestrator | 2025-02-04 09:29:32.063676 | orchestrator | 2025-02-04 09:29:32.063686 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:29:32.063695 | orchestrator | Tuesday 04 February 2025 09:29:30 +0000 (0:00:08.844) 0:02:17.152 ****** 2025-02-04 09:29:32.063703 | orchestrator | =============================================================================== 2025-02-04 09:29:32.063711 | orchestrator | common : Ensure fluentd image is present for label check --------------- 33.66s 2025-02-04 09:29:32.063719 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 19.24s 2025-02-04 09:29:32.063727 | orchestrator | common : Restart cron container ----------------------------------------- 8.84s 2025-02-04 09:29:32.063735 | orchestrator | common : Restart fluentd container -------------------------------------- 7.78s 2025-02-04 09:29:32.063743 | orchestrator | common : Copying over config.json files for services -------------------- 5.54s 2025-02-04 09:29:32.063751 | orchestrator | common : Ensuring config directories exist ------------------------------ 5.36s 2025-02-04 09:29:32.063777 | orchestrator | common : Copying over td-agent.conf ------------------------------------- 5.24s 2025-02-04 09:29:32.063785 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.99s 2025-02-04 09:29:32.063793 | orchestrator | common : Check common containers ---------------------------------------- 4.55s 2025-02-04 09:29:32.063801 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.24s 2025-02-04 09:29:32.063809 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.20s 2025-02-04 09:29:32.063817 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.96s 2025-02-04 
09:29:32.063825 | orchestrator | common : Fetch fluentd Docker image labels ------------------------------ 2.77s 2025-02-04 09:29:32.063833 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.76s 2025-02-04 09:29:32.063841 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.64s 2025-02-04 09:29:32.063849 | orchestrator | common : Creating log volume -------------------------------------------- 2.35s 2025-02-04 09:29:32.063857 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.31s 2025-02-04 09:29:32.063865 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.27s 2025-02-04 09:29:32.063873 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.25s 2025-02-04 09:29:32.063881 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 2.01s 2025-02-04 09:29:32.063890 | orchestrator | 2025-02-04 09:29:32 | INFO  | Task b82aee9d-3124-409e-9e1a-b44504dbaea9 is in state STARTED 2025-02-04 09:29:32.063898 | orchestrator | 2025-02-04 09:29:32 | INFO  | Task b629ccc1-567f-4e79-920e-72260e05be89 is in state STARTED 2025-02-04 09:29:32.063906 | orchestrator | 2025-02-04 09:29:32 | INFO  | Task aaa777a9-94a3-4bb5-b700-c27b075fa8de is in state STARTED 2025-02-04 09:29:32.063914 | orchestrator | 2025-02-04 09:29:32 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED 2025-02-04 09:29:32.063928 | orchestrator | 2025-02-04 09:29:32 | INFO  | Task 0987ddec-e5d4-49f8-8a32-0f1e0038e8a5 is in state STARTED 2025-02-04 09:29:35.113079 | orchestrator | 2025-02-04 09:29:32 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:29:35.113227 | orchestrator | 2025-02-04 09:29:35 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:29:35.113448 | orchestrator | 2025-02-04 09:29:35 | INFO  | Task b82aee9d-3124-409e-9e1a-b44504dbaea9 is in state STARTED 2025-02-04 09:29:35.114286 | orchestrator | 2025-02-04 09:29:35 | INFO  | Task b629ccc1-567f-4e79-920e-72260e05be89 is in state STARTED 2025-02-04 09:29:35.122900 | orchestrator | 2025-02-04 09:29:35 | INFO  | Task aaa777a9-94a3-4bb5-b700-c27b075fa8de is in state STARTED 2025-02-04 09:29:35.123896 | orchestrator | 2025-02-04 09:29:35 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED 2025-02-04 09:29:35.124958 | orchestrator | 2025-02-04 09:29:35 | INFO  | Task 0987ddec-e5d4-49f8-8a32-0f1e0038e8a5 is in state STARTED 2025-02-04 09:29:38.173053 | orchestrator | 2025-02-04 09:29:35 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:29:38.173203 | orchestrator | 2025-02-04 09:29:38 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:29:38.173349 | orchestrator | 2025-02-04 09:29:38 | INFO  | Task b82aee9d-3124-409e-9e1a-b44504dbaea9 is in state STARTED 2025-02-04 09:29:38.173379 | orchestrator | 2025-02-04 09:29:38 | INFO  | Task b629ccc1-567f-4e79-920e-72260e05be89 is in state STARTED 2025-02-04 09:29:38.174163 | orchestrator | 2025-02-04 09:29:38 | INFO  | Task aaa777a9-94a3-4bb5-b700-c27b075fa8de is in state STARTED 2025-02-04 09:29:38.174735 | orchestrator | 2025-02-04 09:29:38 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED 2025-02-04 09:29:38.175435 | orchestrator | 2025-02-04 09:29:38 | INFO  | Task 0987ddec-e5d4-49f8-8a32-0f1e0038e8a5 is in state STARTED 2025-02-04 09:29:41.217651 | 
orchestrator | 2025-02-04 09:29:38 | INFO  | Wait 1 second(s) until the next check
[... the same six tasks are polled every ~3 s and all remain in state STARTED through 09:29:56 ...]
2025-02-04 09:29:59.609360 | orchestrator | 2025-02-04 09:29:56 | INFO  | Wait 1 second(s) until the next check
2025-02-04 09:29:59.609496 | orchestrator | 2025-02-04 09:29:59 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED
2025-02-04 09:29:59.609729 | orchestrator | 2025-02-04 09:29:59 | INFO  | Task b82aee9d-3124-409e-9e1a-b44504dbaea9 is in state SUCCESS
2025-02-04 09:29:59.609783 | orchestrator | 2025-02-04 09:29:59 | INFO  | Task b629ccc1-567f-4e79-920e-72260e05be89 is in state STARTED
2025-02-04 09:29:59.610081 | orchestrator | 2025-02-04 09:29:59 | INFO  | Task aaa777a9-94a3-4bb5-b700-c27b075fa8de is in state STARTED
2025-02-04 09:29:59.610670 | orchestrator | 2025-02-04 09:29:59 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED
2025-02-04 09:29:59.611240 | orchestrator | 2025-02-04 09:29:59 | INFO  | Task 0987ddec-e5d4-49f8-8a32-0f1e0038e8a5 is in state STARTED
2025-02-04 09:29:59.612109 | orchestrator | 2025-02-04 09:29:59 | INFO  | Task 02694c06-bc5b-45b7-8fe0-bd32e214c7cd is in state STARTED
2025-02-04 09:30:02.658850 | orchestrator | 2025-02-04 09:29:59 | INFO  | Wait 1 second(s) until the next check
2025-02-04 09:30:02.658992 | orchestrator | 2025-02-04 09:30:02 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED
2025-02-04 09:30:02.659081 | orchestrator | 2025-02-04 09:30:02 | INFO  | Task b629ccc1-567f-4e79-920e-72260e05be89 is in state STARTED
2025-02-04 09:30:02.659374 | orchestrator | 2025-02-04 09:30:02 | INFO  | Task
aaa777a9-94a3-4bb5-b700-c27b075fa8de is in state STARTED 2025-02-04 09:30:02.660274 | orchestrator | 2025-02-04 09:30:02 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED 2025-02-04 09:30:02.662547 | orchestrator | 2025-02-04 09:30:02 | INFO  | Task 0987ddec-e5d4-49f8-8a32-0f1e0038e8a5 is in state STARTED 2025-02-04 09:30:02.663959 | orchestrator | 2025-02-04 09:30:02 | INFO  | Task 02694c06-bc5b-45b7-8fe0-bd32e214c7cd is in state STARTED 2025-02-04 09:30:05.712645 | orchestrator | 2025-02-04 09:30:02 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:30:05.712839 | orchestrator | 2025-02-04 09:30:05 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:30:05.714560 | orchestrator | 2025-02-04 09:30:05 | INFO  | Task b629ccc1-567f-4e79-920e-72260e05be89 is in state STARTED 2025-02-04 09:30:05.715840 | orchestrator | 2025-02-04 09:30:05 | INFO  | Task aaa777a9-94a3-4bb5-b700-c27b075fa8de is in state SUCCESS 2025-02-04 09:30:05.715886 | orchestrator | 2025-02-04 09:30:05.715903 | orchestrator | 2025-02-04 09:30:05.715917 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-04 09:30:05.715950 | orchestrator | 2025-02-04 09:30:05.715966 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-04 09:30:05.715985 | orchestrator | Tuesday 04 February 2025 09:29:37 +0000 (0:00:00.565) 0:00:00.565 ****** 2025-02-04 09:30:05.716000 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:30:05.716016 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:30:05.716030 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:30:05.716044 | orchestrator | 2025-02-04 09:30:05.716058 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-04 09:30:05.716073 | orchestrator | Tuesday 04 February 2025 09:29:38 +0000 (0:00:00.372) 0:00:00.938 ****** 2025-02-04 09:30:05.716089 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-02-04 09:30:05.716116 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-02-04 09:30:05.716142 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-02-04 09:30:05.716167 | orchestrator | 2025-02-04 09:30:05.716192 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-02-04 09:30:05.716216 | orchestrator | 2025-02-04 09:30:05.716294 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-02-04 09:30:05.716318 | orchestrator | Tuesday 04 February 2025 09:29:38 +0000 (0:00:00.519) 0:00:01.458 ****** 2025-02-04 09:30:05.716342 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:30:05.716368 | orchestrator | 2025-02-04 09:30:05.716392 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-02-04 09:30:05.716416 | orchestrator | Tuesday 04 February 2025 09:29:39 +0000 (0:00:01.238) 0:00:02.696 ****** 2025-02-04 09:30:05.716439 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-02-04 09:30:05.716456 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-02-04 09:30:05.716496 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-02-04 09:30:05.716512 | orchestrator | 2025-02-04 09:30:05.716528 | orchestrator | TASK [memcached : Copying over config.json files for 
services] ***************** 2025-02-04 09:30:05.716545 | orchestrator | Tuesday 04 February 2025 09:29:41 +0000 (0:00:01.365) 0:00:04.061 ****** 2025-02-04 09:30:05.716561 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-02-04 09:30:05.716577 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-02-04 09:30:05.716594 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-02-04 09:30:05.716609 | orchestrator | 2025-02-04 09:30:05.716625 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-02-04 09:30:05.716642 | orchestrator | Tuesday 04 February 2025 09:29:44 +0000 (0:00:02.931) 0:00:06.992 ****** 2025-02-04 09:30:05.716658 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:30:05.716680 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:30:05.716694 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:30:05.716708 | orchestrator | 2025-02-04 09:30:05.716722 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-02-04 09:30:05.716736 | orchestrator | Tuesday 04 February 2025 09:29:48 +0000 (0:00:03.961) 0:00:10.954 ****** 2025-02-04 09:30:05.716750 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:30:05.716764 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:30:05.716803 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:30:05.716817 | orchestrator | 2025-02-04 09:30:05.716832 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:30:05.716846 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-04 09:30:05.716862 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-04 09:30:05.716876 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-04 09:30:05.716890 | orchestrator | 2025-02-04 09:30:05.716904 | orchestrator | 2025-02-04 09:30:05.716918 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:30:05.716932 | orchestrator | Tuesday 04 February 2025 09:29:55 +0000 (0:00:07.678) 0:00:18.632 ****** 2025-02-04 09:30:05.716946 | orchestrator | =============================================================================== 2025-02-04 09:30:05.716960 | orchestrator | memcached : Restart memcached container --------------------------------- 7.68s 2025-02-04 09:30:05.716974 | orchestrator | memcached : Check memcached container ----------------------------------- 3.96s 2025-02-04 09:30:05.716989 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.93s 2025-02-04 09:30:05.717003 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.37s 2025-02-04 09:30:05.717017 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.24s 2025-02-04 09:30:05.717031 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.52s 2025-02-04 09:30:05.717044 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.37s 2025-02-04 09:30:05.717058 | orchestrator | 2025-02-04 09:30:05.717072 | orchestrator | 2025-02-04 09:30:05.717086 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-04 09:30:05.717100 | orchestrator | 
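[Editor's note: the PLAY RECAP / TASKS RECAP blocks above are the natural checkpoints when post-processing a log like this one. A minimal sketch in Python of a recap check (a hypothetical helper, not part of the osism tooling) that flags any host reporting failed or unreachable tasks:]

    import re

    # Matches recap lines such as:
    #   testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
    RECAP_RE = re.compile(
        r"(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
        r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
    )

    def failed_hosts(log_text: str) -> list[str]:
        """Return hosts whose PLAY RECAP reports failures or unreachability."""
        return [
            m["host"]
            for m in RECAP_RE.finditer(log_text)
            if int(m["failed"]) or int(m["unreachable"])
        ]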
2025-02-04 09:30:05.717114 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-02-04 09:30:05.717128 | orchestrator | Tuesday 04 February 2025 09:29:36 +0000 (0:00:00.873) 0:00:00.873 ******
2025-02-04 09:30:05.717142 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:30:05.717156 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:30:05.717170 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:30:05.717184 | orchestrator |
2025-02-04 09:30:05.717198 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-02-04 09:30:05.717224 | orchestrator | Tuesday 04 February 2025 09:29:37 +0000 (0:00:00.885) 0:00:01.758 ******
2025-02-04 09:30:05.717249 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2025-02-04 09:30:05.717263 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2025-02-04 09:30:05.717279 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2025-02-04 09:30:05.717294 | orchestrator |
2025-02-04 09:30:05.717307 | orchestrator | PLAY [Apply role redis] ********************************************************
2025-02-04 09:30:05.717322 | orchestrator |
2025-02-04 09:30:05.717336 | orchestrator | TASK [redis : include_tasks] ***************************************************
2025-02-04 09:30:05.717355 | orchestrator | Tuesday 04 February 2025 09:29:37 +0000 (0:00:00.325) 0:00:02.084 ******
2025-02-04 09:30:05.717406 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-02-04 09:30:05.717423 | orchestrator |
2025-02-04 09:30:05.717460 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2025-02-04 09:30:05.717476 | orchestrator | Tuesday 04 February 2025 09:29:38 +0000 (0:00:00.662) 0:00:02.747 ******
2025-02-04 09:30:05.717492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis:2024.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-02-04 09:30:05.717544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/redis-sentinel:2024.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
[... identical redis and redis-sentinel item dicts repeat for testbed-node-1 and testbed-node-2 here, and for all three nodes on each of the three tasks below; every item reported changed, so only the task headers and timings are kept ...]
2025-02-04 09:30:05.717622 | orchestrator |
2025-02-04 09:30:05.717636 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2025-02-04 09:30:05.717651 | orchestrator | Tuesday 04 February 2025 09:29:40 +0000 (0:00:02.015) 0:00:04.762 ******
2025-02-04 09:30:05.717806 | orchestrator |
2025-02-04 09:30:05.717822 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2025-02-04 09:30:05.717836 | orchestrator | Tuesday 04 February 2025 09:29:43 +0000 (0:00:03.310) 0:00:08.072 ******
2025-02-04 09:30:05.717954 | orchestrator |
2025-02-04 09:30:05.717968 | orchestrator | TASK [redis : Check redis containers] ******************************************
2025-02-04 09:30:05.717982 | orchestrator | Tuesday 04 February 2025 09:29:49 +0000 (0:00:05.086) 0:00:13.159 ******
2025-02-04 09:30:05.718149 | orchestrator |
2025-02-04 09:30:05.718170 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-02-04 09:30:05.718323 | orchestrator | Tuesday 04 February 2025 09:29:53 +0000 (0:00:04.143) 0:00:17.303 ******
2025-02-04 09:30:05.718424 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-02-04 09:30:05.718435 | orchestrator | Tuesday 04 February 2025 09:29:53 +0000 (0:00:00.124) 0:00:17.427 ******
2025-02-04 09:30:05.718453 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-02-04 09:30:05.718459 | orchestrator | Tuesday 04 February 2025 09:29:53 +0000 (0:00:00.072) 0:00:17.499 ******
2025-02-04 09:30:05.718465 | orchestrator |
2025-02-04 09:30:05.718472 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-02-04 09:30:05.718478 | orchestrator | Tuesday 04 February 2025 09:29:53 +0000 (0:00:00.333) 0:00:17.833 ******
2025-02-04 09:30:05.718485 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:30:05.718492 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:30:05.718498 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:30:05.718504 | orchestrator |
2025-02-04 09:30:05.718511 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-02-04 09:30:05.718517 | orchestrator | Tuesday 04 February 2025 09:29:59 +0000 (0:00:05.278) 0:00:23.112 ******
2025-02-04 09:30:05.718523 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:30:05.718544 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:30:05.718553 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:30:05.718563 | orchestrator |
2025-02-04 09:30:05.718574 | orchestrator | PLAY RECAP *********************************************************************
2025-02-04 09:30:05.718584 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-02-04 09:30:05.718594 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-02-04 09:30:05.718605 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-02-04 09:30:05.718625 | orchestrator |
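[Editor's note: each item dict above carries a healthcheck block with string values in seconds (interval '30', retries '3', start_period '5', timeout '30') plus a CMD-SHELL test. As a rough sketch of how such a block maps onto the Docker Engine API's Healthcheck object, which expects nanosecond durations; this illustrates the data shape only, not the actual kolla_container implementation:]

    NS_PER_S = 1_000_000_000  # Docker API durations are expressed in nanoseconds

    def to_docker_healthcheck(hc: dict) -> dict:
        # hc is a kolla-style healthcheck block, e.g.
        # {'interval': '30', 'retries': '3', 'start_period': '5',
        #  'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'],
        #  'timeout': '30'}
        return {
            "Test": hc["test"],
            "Interval": int(hc["interval"]) * NS_PER_S,
            "Timeout": int(hc["timeout"]) * NS_PER_S,
            "StartPeriod": int(hc["start_period"]) * NS_PER_S,
            "Retries": int(hc["retries"]),
        }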
2025-02-04 09:30:05.718635 | orchestrator | TASKS RECAP ********************************************************************
2025-02-04 09:30:05.718646 | orchestrator | Tuesday 04 February 2025 09:30:04 +0000 (0:00:05.827) 0:00:28.940 ******
2025-02-04 09:30:05.718656 | orchestrator | ===============================================================================
2025-02-04 09:30:05.718667 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 5.83s
2025-02-04 09:30:05.718700 | orchestrator | redis : Restart redis container ----------------------------------------- 5.28s
2025-02-04 09:30:05.718711 | orchestrator | redis : Copying over redis config files --------------------------------- 5.09s
2025-02-04 09:30:05.718718 | orchestrator | redis : Check redis containers ------------------------------------------ 4.14s
2025-02-04 09:30:05.718724 | orchestrator | redis : Copying over default config.json files -------------------------- 3.31s
2025-02-04 09:30:05.718730 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.02s
2025-02-04 09:30:05.718736 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.89s
2025-02-04 09:30:05.718743 | orchestrator | redis : include_tasks --------------------------------------------------- 0.66s
2025-02-04 09:30:05.718753 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.53s
2025-02-04 09:30:05.718760 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.33s
2025-02-04 09:30:05.718787 | orchestrator | 2025-02-04 09:30:05 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED
2025-02-04 09:30:05.718811 | orchestrator | 2025-02-04 09:30:05 | INFO  | Task 0987ddec-e5d4-49f8-8a32-0f1e0038e8a5 is in state STARTED
2025-02-04 09:30:08.750974 | orchestrator | 2025-02-04 09:30:05 | INFO  | Task 02694c06-bc5b-45b7-8fe0-bd32e214c7cd is in state STARTED
2025-02-04 09:30:08.751096 | orchestrator | 2025-02-04 09:30:05 | INFO  | Wait 1 second(s) until the next check
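[Editor's note: the interleaved "Task … is in state STARTED" and "Wait 1 second(s) until the next check" lines come from the deploy wrapper polling its background tasks until each reaches a terminal state. A minimal Python sketch of that pattern, where get_task_state is a hypothetical stand-in for the real state lookup:]

    import time

    def wait_for_tasks(task_ids, get_task_state, poll_interval=1.0):
        # Poll every pending task; drop it once it reaches SUCCESS,
        # fail fast if any task reports FAILURE.
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state == "SUCCESS":
                    pending.discard(task_id)
                elif state == "FAILURE":
                    raise RuntimeError(f"Task {task_id} failed")
            if pending:
                print(f"Wait {poll_interval:.0f} second(s) until the next check")
                time.sleep(poll_interval)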
[... the five remaining tasks (cd29dea3…, b629ccc1…, 884b52a5…, 0987ddec…, 02694c06…) are polled every ~3 s and all stay in state STARTED from 09:30:08 through 09:31:06 ...]
2025-02-04 09:31:10.000260 | orchestrator | 2025-02-04 09:31:06 | INFO  | Task
02694c06-bc5b-45b7-8fe0-bd32e214c7cd is in state STARTED 2025-02-04 09:31:10.000279 | orchestrator | 2025-02-04 09:31:06 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:31:10.000309 | orchestrator | 2025-02-04 09:31:09 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:31:10.006247 | orchestrator | 2025-02-04 09:31:09 | INFO  | Task b629ccc1-567f-4e79-920e-72260e05be89 is in state STARTED 2025-02-04 09:31:10.011448 | orchestrator | 2025-02-04 09:31:10 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED 2025-02-04 09:31:10.014207 | orchestrator | 2025-02-04 09:31:10 | INFO  | Task 0987ddec-e5d4-49f8-8a32-0f1e0038e8a5 is in state STARTED 2025-02-04 09:31:10.018925 | orchestrator | 2025-02-04 09:31:10 | INFO  | Task 02694c06-bc5b-45b7-8fe0-bd32e214c7cd is in state STARTED 2025-02-04 09:31:10.019246 | orchestrator | 2025-02-04 09:31:10 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:31:13.069916 | orchestrator | 2025-02-04 09:31:13 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:31:13.072343 | orchestrator | 2025-02-04 09:31:13 | INFO  | Task b629ccc1-567f-4e79-920e-72260e05be89 is in state SUCCESS 2025-02-04 09:31:13.074771 | orchestrator | 2025-02-04 09:31:13.074867 | orchestrator | 2025-02-04 09:31:13.074893 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-04 09:31:13.074913 | orchestrator | 2025-02-04 09:31:13.074927 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-04 09:31:13.074942 | orchestrator | Tuesday 04 February 2025 09:29:37 +0000 (0:00:01.211) 0:00:01.211 ****** 2025-02-04 09:31:13.074977 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:31:13.074994 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:31:13.075008 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:31:13.075022 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:31:13.075036 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:31:13.075049 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:31:13.075064 | orchestrator | 2025-02-04 09:31:13.075078 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-04 09:31:13.075092 | orchestrator | Tuesday 04 February 2025 09:29:38 +0000 (0:00:00.897) 0:00:02.109 ****** 2025-02-04 09:31:13.075107 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-02-04 09:31:13.075122 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-02-04 09:31:13.075136 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-02-04 09:31:13.075150 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-02-04 09:31:13.075163 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-02-04 09:31:13.075177 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-02-04 09:31:13.075191 | orchestrator | 2025-02-04 09:31:13.075205 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-02-04 09:31:13.075219 | orchestrator | 2025-02-04 09:31:13.075233 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-02-04 09:31:13.075247 | orchestrator | Tuesday 04 February 2025 09:29:39 
+0000 (0:00:01.158) 0:00:03.268 ******
2025-02-04 09:31:13.075262 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-02-04 09:31:13.075277 | orchestrator |
2025-02-04 09:31:13.075291 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-02-04 09:31:13.075305 | orchestrator | Tuesday 04 February 2025 09:29:41 +0000 (0:00:02.149) 0:00:05.417 ******
2025-02-04 09:31:13.075319 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-02-04 09:31:13.075333 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-02-04 09:31:13.075351 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-02-04 09:31:13.075367 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-02-04 09:31:13.075384 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-02-04 09:31:13.075400 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-02-04 09:31:13.075416 | orchestrator |
2025-02-04 09:31:13.075432 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-02-04 09:31:13.075449 | orchestrator | Tuesday 04 February 2025 09:29:44 +0000 (0:00:02.523) 0:00:07.941 ******
2025-02-04 09:31:13.075465 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-02-04 09:31:13.075481 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-02-04 09:31:13.075497 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-02-04 09:31:13.075512 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-02-04 09:31:13.075528 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-02-04 09:31:13.075545 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-02-04 09:31:13.075560 | orchestrator |
2025-02-04 09:31:13.075576 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-02-04 09:31:13.075593 | orchestrator | Tuesday 04 February 2025 09:29:47 +0000 (0:00:03.491) 0:00:11.433 ******
2025-02-04 09:31:13.075609 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2025-02-04 09:31:13.075625 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:31:13.075642 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2025-02-04 09:31:13.075658 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:31:13.075681 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2025-02-04 09:31:13.075698 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:31:13.075724 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2025-02-04 09:31:13.075740 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:31:13.075754 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2025-02-04 09:31:13.075768 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:31:13.075782 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2025-02-04 09:31:13.075814 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:31:13.075830 | orchestrator |
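[Editor's note: the module-load tasks above do two things: load the openvswitch kernel module immediately, then persist it under /etc/modules-load.d/ so systemd-modules-load reloads it at boot. A minimal Python sketch of those two steps; the role itself uses Ansible modules for this, and the file name follows the modules-load.d(5) convention:]

    import subprocess
    from pathlib import Path

    def load_and_persist_module(name: str) -> None:
        # Load the module now ("module-load : Load modules").
        subprocess.run(["modprobe", name], check=True)
        # Persist it for the next boot ("Persist modules via modules-load.d").
        conf = Path("/etc/modules-load.d") / f"{name}.conf"
        conf.write_text(f"{name}\n")

    load_and_persist_module("openvswitch")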
TASK [openvswitch : Create /run/openvswitch directory on host] *****************
Tuesday 04 February 2025 09:29:51 +0000 (0:00:03.894)       0:00:15.327 ******
skipping: all six testbed nodes

TASK [openvswitch : Ensuring config directories exist] *************************
Tuesday 04 February 2025 09:29:53 +0000 (0:00:02.307)       0:00:17.635 ******
changed: all six testbed nodes => (item=openvswitch-db-server) and (item=openvswitch-vswitchd); the container definition attached to each item is identical on every node and is reproduced once below
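The loop items in these tasks carry the complete Kolla service definitions. Rendered once as YAML for readability (contents taken verbatim from the repeated log items):

    openvswitch-db-server:
      container_name: openvswitch_db
      image: nexus.testbed.osism.xyz:8193/kolla/openvswitch-db-server:2024.1
      enabled: true
      group: openvswitch
      host_in_groups: true
      volumes:
        - /etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro
        - /etc/localtime:/etc/localtime:ro
        - /etc/timezone:/etc/timezone:ro
        - /lib/modules:/lib/modules:ro
        - /run/openvswitch:/run/openvswitch:shared
        - kolla_logs:/var/log/kolla/
        - openvswitch_db:/var/lib/openvswitch/
      dimensions: {}
      healthcheck:
        interval: "30"
        retries: "3"
        start_period: "5"
        test: ["CMD-SHELL", "ovsdb-client list-dbs"]
        timeout: "30"

    openvswitch-vswitchd:
      container_name: openvswitch_vswitchd
      image: nexus.testbed.osism.xyz:8193/kolla/openvswitch-vswitchd:2024.1
      enabled: true
      group: openvswitch
      host_in_groups: true
      privileged: true
      volumes:
        - /etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro
        - /etc/localtime:/etc/localtime:ro
        - /etc/timezone:/etc/timezone:ro
        - /lib/modules:/lib/modules:ro
        - /run/openvswitch:/run/openvswitch:shared
        - kolla_logs:/var/log/kolla/
      dimensions: {}
      healthcheck:
        interval: "30"
        retries: "3"
        start_period: "5"
        test: ["CMD-SHELL", "ovs-appctl version"]
        timeout: "30"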
TASK [openvswitch : Copying over config.json files for services] ***************
Tuesday 04 February 2025 09:29:59 +0000 (0:00:05.164)       0:00:22.799 ******
changed: all six testbed nodes => (item=openvswitch-db-server) and (item=openvswitch-vswitchd), with the same container definitions as above
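Each Kolla container is started from a config.json that tells the kolla_start entrypoint which command to run and which config files to install before handing over. The actual contents for these containers are not in the log; a hypothetical example of the general shape, written here as YAML for consistency (on disk it is JSON):

    # Hypothetical shape of /etc/kolla/openvswitch-db-server/config.json
    command: start-ovsdb-server          # assumption: the script copied in the next task
    config_files:
      - source: /var/lib/kolla/config_files/start-ovsdb-server
        dest: /usr/local/bin/start-ovsdb-server
        owner: root
        perm: "0755"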
TASK [openvswitch : Copying over start-ovs file for openvswitch-vswitchd] ******
Tuesday 04 February 2025 09:30:03 +0000 (0:00:04.336)       0:00:27.136 ******
changed: all six testbed nodes

TASK [openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server] ***
Tuesday 04 February 2025 09:30:06 +0000 (0:00:03.389)       0:00:30.525 ******
changed: all six testbed nodes

TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
Tuesday 04 February 2025 09:30:09 +0000 (0:00:02.437)       0:00:32.963 ******
skipping: all six testbed nodes
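Copy and template tasks like the two above are what later trigger the container restarts: when the rendered file differs from what is already on the node, the task reports "changed" and notifies the matching restart handler. A minimal sketch under assumed names (node_config_directory is a standard kolla-ansible variable; the template file name is a guess):

    - name: Copying over start-ovs file for openvswitch-vswitchd (sketch)
      ansible.builtin.template:
        src: start-ovs.j2
        dest: "{{ node_config_directory }}/openvswitch-vswitchd/start-ovs"
        mode: "0770"
      notify:
        - Restart openvswitch-vswitchd container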
TASK [openvswitch : Check openvswitch containers] ******************************
Tuesday 04 February 2025 09:30:11 +0000 (0:00:01.895)       0:00:34.858 ******
changed: all six testbed nodes => (item=openvswitch-db-server) and (item=openvswitch-vswitchd), with the same container definitions as above
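The "Check ... containers" task compares each running container against the desired definition and recreates it on a mismatch; in kolla-ansible this is done with the kolla_docker module. A rough sketch (action and parameter names assumed from kolla-ansible conventions, not confirmed by this log):

    - name: Check openvswitch containers (sketch)
      kolla_docker:
        action: recreate_or_restart_container      # assumed action name
        name: "{{ item.value.container_name }}"
        image: "{{ item.value.image }}"
        volumes: "{{ item.value.volumes }}"
        dimensions: "{{ item.value.dimensions }}"
        healthcheck: "{{ item.value.healthcheck | default(omit) }}"
      with_dict: "{{ openvswitch_services }}"      # assumed variable name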
TASK [openvswitch : Flush Handlers] ********************************************
Tuesday 04 February 2025 09:30:14 +0000 (0:00:03.222)       0:00:38.081 ******
(the Flush Handlers meta task is repeated five more times between 09:30:14 and 09:30:15, each with no per-host output)
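Flush Handlers is a meta task: it forces any handlers notified so far (here, the container restarts) to run immediately instead of waiting until the end of the play, so the containers are already up before the role continues configuring OVS. In role code it is simply:

    - name: Flush Handlers
      ansible.builtin.meta: flush_handlers   # run pending notified handlers right now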
RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
Tuesday 04 February 2025 09:30:16 +0000 (0:00:00.228)       0:00:39.746 ******
changed: all six testbed nodes

RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
Tuesday 04 February 2025 09:30:25 +0000 (0:00:09.709)       0:00:49.455 ******
ok: all six testbed nodes

RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
Tuesday 04 February 2025 09:30:30 +0000 (0:00:04.306)       0:00:53.762 ******
changed: all six testbed nodes

TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
Tuesday 04 February 2025 09:30:42 +0000 (0:00:12.425)       0:01:06.187 ******
changed: each node => (item={'col': 'external_ids', 'name': 'system-id', 'value': '<its own node name>'})
changed: each node => (item={'col': 'external_ids', 'name': 'hostname', 'value': '<its own node name>'})
ok: each node => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})

TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
Tuesday 04 February 2025 09:30:53 +0000 (0:00:11.027)       0:01:17.215 ******
skipping: [testbed-node-3] [testbed-node-4] [testbed-node-5] => (item=br-ex)
changed: [testbed-node-0] [testbed-node-1] [testbed-node-2] => (item=br-ex)

TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
Tuesday 04 February 2025 09:30:56 +0000 (0:00:03.033)       0:01:20.248 ******
skipping: [testbed-node-3] [testbed-node-4] [testbed-node-5] => (item=['br-ex', 'vxlan0'])
changed: [testbed-node-0] [testbed-node-1] [testbed-node-2] => (item=['br-ex', 'vxlan0'])
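The pattern here: every node gets its OVS system-id and hostname written into the Open_vSwitch table, but only testbed-node-0 through testbed-node-2 get the external bridge br-ex with a vxlan0 port. A minimal sketch using the openvswitch Ansible collection (module names come from that collection, not from this log; rough ovs-vsctl equivalents in the comments):

    # ovs-vsctl set Open_vSwitch . external_ids:system-id=<node name>
    - name: Set system-id (sketch)
      openvswitch.openvswitch.openvswitch_db:
        table: Open_vSwitch
        record: .
        col: external_ids
        key: system-id
        value: "{{ inventory_hostname }}"

    # ovs-vsctl --may-exist add-br br-ex
    - name: Ensuring OVS bridge is properly setup (sketch)
      openvswitch.openvswitch.openvswitch_bridge:
        bridge: br-ex

    # ovs-vsctl --may-exist add-port br-ex vxlan0
    - name: Ensuring OVS ports are properly setup (sketch)
      openvswitch.openvswitch.openvswitch_port:
        bridge: br-ex
        port: vxlan0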
RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
Tuesday 04 February 2025 09:31:00 +0000 (0:00:04.398)       0:01:24.647 ******
changed: all six testbed nodes

PLAY RECAP *********************************************************************
testbed-node-0 : ok=17  changed=13  unreachable=0  failed=0  skipped=3  rescued=0  ignored=0
testbed-node-1 : ok=17  changed=13  unreachable=0  failed=0  skipped=3  rescued=0  ignored=0
testbed-node-2 : ok=17  changed=13  unreachable=0  failed=0  skipped=3  rescued=0  ignored=0
testbed-node-3 : ok=15  changed=11  unreachable=0  failed=0  skipped=5  rescued=0  ignored=0
testbed-node-4 : ok=15  changed=11  unreachable=0  failed=0  skipped=5  rescued=0  ignored=0
testbed-node-5 : ok=15  changed=11  unreachable=0  failed=0  skipped=5  rescued=0  ignored=0

TASKS RECAP ********************************************************************
Tuesday 04 February 2025 09:31:11 +0000 (0:00:10.256)       0:01:34.903 ******
===============================================================================
openvswitch : Restart openvswitch-vswitchd container ------------------- 22.68s
openvswitch : Set system-id, hostname and hw-offload ------------------- 11.02s
openvswitch : Restart openvswitch-db-server container ------------------- 9.71s
openvswitch : Ensuring config directories exist ------------------------- 5.16s
openvswitch : Ensuring OVS ports are properly setup --------------------- 4.40s
openvswitch : Copying over config.json files for services --------------- 4.34s
openvswitch : Waiting for openvswitch_db service to be ready ------------ 4.31s
module-load : Drop module persistence ----------------------------------- 3.89s
module-load : Persist modules via modules-load.d ------------------------ 3.49s
openvswitch : Copying over start-ovs file for openvswitch-vswitchd ------ 3.39s
openvswitch : Check openvswitch containers ------------------------------ 3.22s
openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.03s
module-load : Load modules ---------------------------------------------- 2.52s
openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server --- 2.44s
openvswitch : Create /run/openvswitch directory on host ----------------- 2.31s
openvswitch : include_tasks --------------------------------------------- 2.15s
openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.90s
openvswitch : Flush Handlers -------------------------------------------- 1.67s
Group hosts based on enabled services ----------------------------------- 1.16s
Group hosts based on Kolla action --------------------------------------- 0.90s
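With the openvswitch play finished, the job returns to polling the remaining manager tasks once per second until each reaches SUCCESS (collapsed below). The same wait-until-done pattern, expressed as an Ansible sketch against an entirely hypothetical status endpoint:

    - name: Wait for a manager task to finish (illustrative sketch only)
      ansible.builtin.uri:
        url: "https://manager.example/api/tasks/{{ task_id }}"   # hypothetical endpoint
        return_content: true
      register: task_status
      until: task_status.json.state == "SUCCESS"
      retries: 3600
      delay: 1   # matches "Wait 1 second(s) until the next check"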
2025-02-04 09:31:13 to 09:32:08 | INFO  | Tasks cd29dea3-7f34-4656-ac40-3dc8e1d4db53, 884b52a5-fd18-45af-902d-ce9b589cbc22, 0987ddec-e5d4-49f8-8a32-0f1e0038e8a5 and 02694c06-bc5b-45b7-8fe0-bd32e214c7cd remain in state STARTED, joined from 09:31:16 by Task 4b1e67be-67e6-4e43-821a-22f805ba6da4; each state is re-checked once per second ("Wait 1 second(s) until the next check")
2025-02-04 09:32:11 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED
2025-02-04 09:32:11 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED
2025-02-04 09:32:11 | INFO  | Task 4b1e67be-67e6-4e43-821a-22f805ba6da4 is in state STARTED
2025-02-04 09:32:11 | INFO  | Task 0987ddec-e5d4-49f8-8a32-0f1e0038e8a5 is in state SUCCESS

PLAY [Prepare all k3s nodes] ***************************************************

TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
Tuesday 04 February 2025 09:28:07 +0000 (0:00:00.430)       0:00:00.430 ******
ok: all six testbed nodes
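The "Validating arguments against arg spec" task is generated automatically by Ansible whenever a role ships a meta/argument_specs.yml: role variables are type-checked before any real task runs. A minimal sketch of what such a spec looks like for this role (the option shown is invented for illustration):

    # roles/k3s_prereq/meta/argument_specs.yml (sketch)
    argument_specs:
      main:
        short_description: Prerequisites
        options:
          k3s_version:          # illustrative option, not taken from this log
            type: str
            required: true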
[testbed-node-3] 2025-02-04 09:32:11.084989 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:32:11.085056 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:32:11.085081 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:32:11.085104 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:32:11.085126 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:32:11.085150 | orchestrator | 2025-02-04 09:32:11.085176 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-02-04 09:32:11.085201 | orchestrator | Tuesday 04 February 2025 09:28:11 +0000 (0:00:01.270) 0:00:03.796 ****** 2025-02-04 09:32:11.085225 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:32:11.085251 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:32:11.085278 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:32:11.085304 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:32:11.085323 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:32:11.085337 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:32:11.085351 | orchestrator | 2025-02-04 09:32:11.085366 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-02-04 09:32:11.085380 | orchestrator | Tuesday 04 February 2025 09:28:12 +0000 (0:00:01.406) 0:00:05.203 ****** 2025-02-04 09:32:11.085395 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:32:11.085409 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:32:11.085423 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:32:11.085437 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:32:11.085451 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:32:11.085465 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:32:11.085479 | orchestrator | 2025-02-04 09:32:11.085493 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-02-04 09:32:11.085518 | orchestrator | Tuesday 04 February 2025 09:28:14 +0000 (0:00:02.357) 0:00:07.560 ****** 2025-02-04 09:32:11.085539 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:32:11.085563 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:32:11.085586 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:32:11.085610 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:32:11.085634 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:32:11.085657 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:32:11.085681 | orchestrator | 2025-02-04 09:32:11.085705 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-02-04 09:32:11.085731 | orchestrator | Tuesday 04 February 2025 09:28:17 +0000 (0:00:02.489) 0:00:10.050 ****** 2025-02-04 09:32:11.085755 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:32:11.085780 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:32:11.085803 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:32:11.085860 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:32:11.085894 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:32:11.085920 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:32:11.085942 | orchestrator | 2025-02-04 09:32:11.085967 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-02-04 09:32:11.085991 | orchestrator | Tuesday 04 February 2025 09:28:18 +0000 (0:00:01.376) 0:00:11.427 ****** 2025-02-04 09:32:11.086083 | orchestrator | skipping: 
[testbed-node-3] 2025-02-04 09:32:11.086116 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:32:11.086140 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:32:11.086165 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:32:11.086189 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:32:11.086212 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:32:11.086235 | orchestrator | 2025-02-04 09:32:11.086258 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-02-04 09:32:11.086280 | orchestrator | Tuesday 04 February 2025 09:28:19 +0000 (0:00:00.982) 0:00:12.409 ****** 2025-02-04 09:32:11.086305 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:32:11.086329 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:32:11.086354 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:32:11.086376 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:32:11.086400 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:32:11.086445 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:32:11.086470 | orchestrator | 2025-02-04 09:32:11.086494 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-02-04 09:32:11.086519 | orchestrator | Tuesday 04 February 2025 09:28:20 +0000 (0:00:00.819) 0:00:13.229 ****** 2025-02-04 09:32:11.086544 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-02-04 09:32:11.086569 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-02-04 09:32:11.086593 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:32:11.086610 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-02-04 09:32:11.086624 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-02-04 09:32:11.086638 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:32:11.086652 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-02-04 09:32:11.086667 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-02-04 09:32:11.086681 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:32:11.086695 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-02-04 09:32:11.086724 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-02-04 09:32:11.086740 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:32:11.086754 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-02-04 09:32:11.086768 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-02-04 09:32:11.086782 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:32:11.086796 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-02-04 09:32:11.086810 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-02-04 09:32:11.086997 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:32:11.087019 | orchestrator | 2025-02-04 09:32:11.087034 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-02-04 09:32:11.087048 | orchestrator | Tuesday 04 February 2025 09:28:21 +0000 (0:00:00.646) 0:00:13.876 ****** 2025-02-04 09:32:11.087062 | orchestrator | skipping: 
[testbed-node-3] 2025-02-04 09:32:11.087076 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:32:11.087090 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:32:11.087104 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:32:11.087118 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:32:11.087132 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:32:11.087146 | orchestrator | 2025-02-04 09:32:11.087161 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-02-04 09:32:11.087176 | orchestrator | Tuesday 04 February 2025 09:28:22 +0000 (0:00:01.189) 0:00:15.065 ****** 2025-02-04 09:32:11.087191 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:32:11.087205 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:32:11.087219 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:32:11.087231 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:32:11.087243 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:32:11.087256 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:32:11.087268 | orchestrator | 2025-02-04 09:32:11.087280 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-02-04 09:32:11.087293 | orchestrator | Tuesday 04 February 2025 09:28:23 +0000 (0:00:01.434) 0:00:16.499 ****** 2025-02-04 09:32:11.087305 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:32:11.087318 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:32:11.087331 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:32:11.087343 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:32:11.087356 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:32:11.087368 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:32:11.087381 | orchestrator | 2025-02-04 09:32:11.087394 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-02-04 09:32:11.087416 | orchestrator | Tuesday 04 February 2025 09:28:29 +0000 (0:00:06.241) 0:00:22.740 ****** 2025-02-04 09:32:11.087428 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:32:11.087441 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:32:11.087453 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:32:11.087466 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:32:11.087479 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:32:11.087491 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:32:11.087504 | orchestrator | 2025-02-04 09:32:11.087516 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-02-04 09:32:11.087529 | orchestrator | Tuesday 04 February 2025 09:28:31 +0000 (0:00:01.086) 0:00:23.827 ****** 2025-02-04 09:32:11.087541 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:32:11.087554 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:32:11.087566 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:32:11.087578 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:32:11.087591 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:32:11.087603 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:32:11.087616 | orchestrator | 2025-02-04 09:32:11.087629 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-02-04 09:32:11.087642 | orchestrator | Tuesday 04 February 2025 09:28:32 +0000 (0:00:01.761) 0:00:25.588 ****** 
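
Note: the k3s_custom_registries tasks that follow are skipped in this run because no custom container registry is configured for the testbed. When the option is enabled, the role creates /etc/rancher/k3s/registries.yaml on each node so containerd pulls images through a mirror. A minimal sketch of what ends up in that file, written here as a shell heredoc; the mirror endpoint registry.example.com:5000 is a hypothetical placeholder, not a value from this deployment:

    # sketch only: mirror docker.io pulls through a private registry (placeholder endpoint)
    cat > /etc/rancher/k3s/registries.yaml <<'EOF'
    mirrors:
      docker.io:
        endpoint:
          - "https://registry.example.com:5000"
    EOF
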
2025-02-04 09:32:11.087655 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:32:11.087672 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:32:11.087685 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:32:11.087697 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:32:11.087710 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:32:11.087722 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:32:11.087734 | orchestrator | 2025-02-04 09:32:11.087751 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-02-04 09:32:11.087764 | orchestrator | Tuesday 04 February 2025 09:28:33 +0000 (0:00:00.584) 0:00:26.173 ****** 2025-02-04 09:32:11.087777 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2025-02-04 09:32:11.087790 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2025-02-04 09:32:11.087802 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:32:11.087815 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2025-02-04 09:32:11.087852 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2025-02-04 09:32:11.087865 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:32:11.087878 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2025-02-04 09:32:11.087890 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2025-02-04 09:32:11.087903 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:32:11.087916 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2025-02-04 09:32:11.087929 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2025-02-04 09:32:11.087941 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:32:11.087954 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2025-02-04 09:32:11.087967 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2025-02-04 09:32:11.087979 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:32:11.087992 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2025-02-04 09:32:11.088008 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2025-02-04 09:32:11.088029 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:32:11.088049 | orchestrator | 2025-02-04 09:32:11.088068 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-02-04 09:32:11.088100 | orchestrator | Tuesday 04 February 2025 09:28:34 +0000 (0:00:01.225) 0:00:27.399 ****** 2025-02-04 09:32:11.088121 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:32:11.088142 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:32:11.088163 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:32:11.088194 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:32:11.088216 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:32:11.088234 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:32:11.088254 | orchestrator | 2025-02-04 09:32:11.088275 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-02-04 09:32:11.088293 | orchestrator | 2025-02-04 09:32:11.088312 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-02-04 09:32:11.088334 | orchestrator | Tuesday 04 February 2025 09:28:36 +0000 (0:00:01.481) 0:00:28.880 ****** 2025-02-04 09:32:11.088353 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:32:11.088375 | orchestrator | ok: 
[testbed-node-1] 2025-02-04 09:32:11.088397 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:32:11.088418 | orchestrator | 2025-02-04 09:32:11.088439 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-02-04 09:32:11.088452 | orchestrator | Tuesday 04 February 2025 09:28:37 +0000 (0:00:01.702) 0:00:30.583 ****** 2025-02-04 09:32:11.088465 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:32:11.088477 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:32:11.088490 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:32:11.088503 | orchestrator | 2025-02-04 09:32:11.088515 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-02-04 09:32:11.088528 | orchestrator | Tuesday 04 February 2025 09:28:39 +0000 (0:00:01.225) 0:00:31.809 ****** 2025-02-04 09:32:11.088542 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:32:11.088564 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:32:11.088585 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:32:11.088606 | orchestrator | 2025-02-04 09:32:11.088628 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-02-04 09:32:11.088649 | orchestrator | Tuesday 04 February 2025 09:28:40 +0000 (0:00:01.507) 0:00:33.316 ****** 2025-02-04 09:32:11.088670 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:32:11.088691 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:32:11.088704 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:32:11.088717 | orchestrator | 2025-02-04 09:32:11.088729 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-02-04 09:32:11.088742 | orchestrator | Tuesday 04 February 2025 09:28:41 +0000 (0:00:00.980) 0:00:34.297 ****** 2025-02-04 09:32:11.088755 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:32:11.088767 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:32:11.088780 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:32:11.088792 | orchestrator | 2025-02-04 09:32:11.088805 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-02-04 09:32:11.088818 | orchestrator | Tuesday 04 February 2025 09:28:41 +0000 (0:00:00.335) 0:00:34.632 ****** 2025-02-04 09:32:11.088894 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:32:11.088909 | orchestrator | 2025-02-04 09:32:11.088921 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-02-04 09:32:11.088934 | orchestrator | Tuesday 04 February 2025 09:28:42 +0000 (0:00:00.787) 0:00:35.419 ****** 2025-02-04 09:32:11.088946 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:32:11.088959 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:32:11.088971 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:32:11.088984 | orchestrator | 2025-02-04 09:32:11.088996 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-02-04 09:32:11.089009 | orchestrator | Tuesday 04 February 2025 09:28:45 +0000 (0:00:02.695) 0:00:38.115 ****** 2025-02-04 09:32:11.089021 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:32:11.089034 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:32:11.089046 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:32:11.089059 | orchestrator | 2025-02-04 09:32:11.089071 | orchestrator | TASK [k3s_server : Download vip 
rbac manifest to first master] ***************** 2025-02-04 09:32:11.089084 | orchestrator | Tuesday 04 February 2025 09:28:46 +0000 (0:00:00.843) 0:00:38.959 ****** 2025-02-04 09:32:11.089096 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:32:11.089119 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:32:11.089132 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:32:11.089145 | orchestrator | 2025-02-04 09:32:11.089158 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-02-04 09:32:11.089170 | orchestrator | Tuesday 04 February 2025 09:28:47 +0000 (0:00:00.837) 0:00:39.797 ****** 2025-02-04 09:32:11.089183 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:32:11.089195 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:32:11.089208 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:32:11.089220 | orchestrator | 2025-02-04 09:32:11.089233 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-02-04 09:32:11.089245 | orchestrator | Tuesday 04 February 2025 09:28:49 +0000 (0:00:02.133) 0:00:41.931 ****** 2025-02-04 09:32:11.089261 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:32:11.089281 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:32:11.089302 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:32:11.089322 | orchestrator | 2025-02-04 09:32:11.089345 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-02-04 09:32:11.089359 | orchestrator | Tuesday 04 February 2025 09:28:49 +0000 (0:00:00.519) 0:00:42.450 ****** 2025-02-04 09:32:11.089371 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:32:11.089384 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:32:11.089397 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:32:11.089409 | orchestrator | 2025-02-04 09:32:11.089427 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-02-04 09:32:11.089444 | orchestrator | Tuesday 04 February 2025 09:28:50 +0000 (0:00:00.414) 0:00:42.865 ****** 2025-02-04 09:32:11.089461 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:32:11.089486 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:32:11.089503 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:32:11.089520 | orchestrator | 2025-02-04 09:32:11.089538 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-02-04 09:32:11.089555 | orchestrator | Tuesday 04 February 2025 09:28:51 +0000 (0:00:01.769) 0:00:44.634 ****** 2025-02-04 09:32:11.089592 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-02-04 09:32:11.089608 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-02-04 09:32:11.089619 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-02-04 09:32:11.089633 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
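
Note: the FAILED - RETRYING messages around this point are expected, not errors. The role starts k3s on the three master nodes inside a transient k3s-init service and then polls until every master has registered with the API server; on this run the check converged after a few attempts (45.08s in total, per the TASKS RECAP at the end of the play). A rough shell equivalent of such a readiness poll; the expected count of 3 (the masters in this play) and the use of plain `k3s kubectl` are assumptions for illustration, not the role's exact command:

    # sketch only: wait until all expected masters are registered with the API server
    expected=3
    until [ "$(k3s kubectl get nodes --no-headers 2>/dev/null | wc -l)" -ge "$expected" ]; do
      sleep 3   # retry interval; the role uses its own retries/delay settings
    done
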
2025-02-04 09:32:11.089650 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-02-04 09:32:11.089667 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-02-04 09:32:11.089683 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-02-04 09:32:11.089699 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-02-04 09:32:11.089716 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-02-04 09:32:11.089733 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-02-04 09:32:11.089756 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-02-04 09:32:11.089783 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-02-04 09:32:11.089799 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:32:11.089815 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:32:11.089852 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:32:11.089870 | orchestrator | 2025-02-04 09:32:11.089888 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-02-04 09:32:11.089906 | orchestrator | Tuesday 04 February 2025 09:29:36 +0000 (0:00:45.080) 0:01:29.714 ****** 2025-02-04 09:32:11.089924 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:32:11.089941 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:32:11.089957 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:32:11.089976 | orchestrator | 2025-02-04 09:32:11.089993 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-02-04 09:32:11.090011 | orchestrator | Tuesday 04 February 2025 09:29:37 +0000 (0:00:00.418) 0:01:30.133 ****** 2025-02-04 09:32:11.090074 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:32:11.090085 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:32:11.090095 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:32:11.090105 | orchestrator | 2025-02-04 09:32:11.090116 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-02-04 09:32:11.090126 | orchestrator | Tuesday 04 February 2025 09:29:38 +0000 (0:00:01.519) 0:01:31.652 ****** 2025-02-04 09:32:11.090136 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:32:11.090146 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:32:11.090157 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:32:11.090167 | orchestrator | 2025-02-04 09:32:11.090177 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-02-04 09:32:11.090187 | orchestrator | Tuesday 04 February 2025 09:29:40 +0000 (0:00:01.187) 0:01:32.840 ****** 2025-02-04 09:32:11.090197 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:32:11.090208 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:32:11.090218 
| orchestrator | changed: [testbed-node-0] 2025-02-04 09:32:11.090229 | orchestrator | 2025-02-04 09:32:11.090239 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-02-04 09:32:11.090249 | orchestrator | Tuesday 04 February 2025 09:29:57 +0000 (0:00:17.317) 0:01:50.158 ****** 2025-02-04 09:32:11.090260 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:32:11.090270 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:32:11.090280 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:32:11.090290 | orchestrator | 2025-02-04 09:32:11.090301 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-02-04 09:32:11.090311 | orchestrator | Tuesday 04 February 2025 09:29:58 +0000 (0:00:01.070) 0:01:51.228 ****** 2025-02-04 09:32:11.090321 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:32:11.090331 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:32:11.090341 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:32:11.090352 | orchestrator | 2025-02-04 09:32:11.090367 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-02-04 09:32:11.090385 | orchestrator | Tuesday 04 February 2025 09:29:59 +0000 (0:00:00.878) 0:01:52.106 ****** 2025-02-04 09:32:11.090404 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:32:11.090422 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:32:11.090439 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:32:11.090457 | orchestrator | 2025-02-04 09:32:11.090475 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-02-04 09:32:11.090494 | orchestrator | Tuesday 04 February 2025 09:30:00 +0000 (0:00:00.750) 0:01:52.857 ****** 2025-02-04 09:32:11.090505 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:32:11.090515 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:32:11.090529 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:32:11.090546 | orchestrator | 2025-02-04 09:32:11.090562 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-02-04 09:32:11.090590 | orchestrator | Tuesday 04 February 2025 09:30:01 +0000 (0:00:00.907) 0:01:53.765 ****** 2025-02-04 09:32:11.090620 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:32:11.090639 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:32:11.090654 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:32:11.090665 | orchestrator | 2025-02-04 09:32:11.090675 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-02-04 09:32:11.090686 | orchestrator | Tuesday 04 February 2025 09:30:01 +0000 (0:00:00.324) 0:01:54.090 ****** 2025-02-04 09:32:11.090696 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:32:11.090706 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:32:11.090722 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:32:11.090739 | orchestrator | 2025-02-04 09:32:11.090755 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-02-04 09:32:11.090771 | orchestrator | Tuesday 04 February 2025 09:30:01 +0000 (0:00:00.645) 0:01:54.735 ****** 2025-02-04 09:32:11.090787 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:32:11.090802 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:32:11.090817 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:32:11.090855 | orchestrator | 2025-02-04 09:32:11.090871 | orchestrator | TASK 
[k3s_server : Copy config file to user home directory] ******************** 2025-02-04 09:32:11.090888 | orchestrator | Tuesday 04 February 2025 09:30:02 +0000 (0:00:00.779) 0:01:55.515 ****** 2025-02-04 09:32:11.090904 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:32:11.090921 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:32:11.090938 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:32:11.090954 | orchestrator | 2025-02-04 09:32:11.090971 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-02-04 09:32:11.090988 | orchestrator | Tuesday 04 February 2025 09:30:03 +0000 (0:00:01.088) 0:01:56.603 ****** 2025-02-04 09:32:11.091007 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:32:11.091023 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:32:11.091041 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:32:11.091058 | orchestrator | 2025-02-04 09:32:11.091076 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-02-04 09:32:11.091093 | orchestrator | Tuesday 04 February 2025 09:30:04 +0000 (0:00:00.867) 0:01:57.470 ****** 2025-02-04 09:32:11.091110 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:32:11.091127 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:32:11.091145 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:32:11.091162 | orchestrator | 2025-02-04 09:32:11.091180 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-02-04 09:32:11.091197 | orchestrator | Tuesday 04 February 2025 09:30:05 +0000 (0:00:00.359) 0:01:57.830 ****** 2025-02-04 09:32:11.091214 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:32:11.091232 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:32:11.091248 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:32:11.091262 | orchestrator | 2025-02-04 09:32:11.091273 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-02-04 09:32:11.091283 | orchestrator | Tuesday 04 February 2025 09:30:05 +0000 (0:00:00.396) 0:01:58.227 ****** 2025-02-04 09:32:11.091293 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:32:11.091304 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:32:11.091324 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:32:11.091336 | orchestrator | 2025-02-04 09:32:11.091351 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-02-04 09:32:11.091362 | orchestrator | Tuesday 04 February 2025 09:30:06 +0000 (0:00:00.878) 0:01:59.105 ****** 2025-02-04 09:32:11.091373 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:32:11.091384 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:32:11.091394 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:32:11.091404 | orchestrator | 2025-02-04 09:32:11.091415 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-02-04 09:32:11.091426 | orchestrator | Tuesday 04 February 2025 09:30:06 +0000 (0:00:00.596) 0:01:59.701 ****** 2025-02-04 09:32:11.091450 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-02-04 09:32:11.091461 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-02-04 09:32:11.091472 | orchestrator | changed: [testbed-node-2] => 
(item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-02-04 09:32:11.091482 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-02-04 09:32:11.091493 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-02-04 09:32:11.091503 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-02-04 09:32:11.091513 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-02-04 09:32:11.091523 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-02-04 09:32:11.091534 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-02-04 09:32:11.091544 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-02-04 09:32:11.091554 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-02-04 09:32:11.091564 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-02-04 09:32:11.091574 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-02-04 09:32:11.091585 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-02-04 09:32:11.091595 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-02-04 09:32:11.091605 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-02-04 09:32:11.091624 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-02-04 09:32:11.091635 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-02-04 09:32:11.091649 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-02-04 09:32:11.091660 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-02-04 09:32:11.091671 | orchestrator | 2025-02-04 09:32:11.091681 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-02-04 09:32:11.091691 | orchestrator | 2025-02-04 09:32:11.091701 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-02-04 09:32:11.091712 | orchestrator | Tuesday 04 February 2025 09:30:09 +0000 (0:00:02.953) 0:02:02.655 ****** 2025-02-04 09:32:11.091722 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:32:11.091732 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:32:11.091742 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:32:11.091753 | orchestrator | 2025-02-04 09:32:11.091763 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-02-04 09:32:11.091773 | orchestrator | Tuesday 04 February 2025 09:30:10 +0000 (0:00:00.512) 0:02:03.168 ****** 2025-02-04 09:32:11.091783 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:32:11.091793 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:32:11.091804 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:32:11.091814 | orchestrator | 2025-02-04 09:32:11.091882 | orchestrator | TASK 
[k3s_agent : Set fact for PXE-booted system] ****************************** 2025-02-04 09:32:11.091894 | orchestrator | Tuesday 04 February 2025 09:30:11 +0000 (0:00:00.613) 0:02:03.781 ****** 2025-02-04 09:32:11.091905 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:32:11.091915 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:32:11.091925 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:32:11.091935 | orchestrator | 2025-02-04 09:32:11.091952 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-02-04 09:32:11.091963 | orchestrator | Tuesday 04 February 2025 09:30:11 +0000 (0:00:00.291) 0:02:04.073 ****** 2025-02-04 09:32:11.091973 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:32:11.091983 | orchestrator | 2025-02-04 09:32:11.091994 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-02-04 09:32:11.092004 | orchestrator | Tuesday 04 February 2025 09:30:11 +0000 (0:00:00.610) 0:02:04.683 ****** 2025-02-04 09:32:11.092014 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:32:11.092024 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:32:11.092035 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:32:11.092045 | orchestrator | 2025-02-04 09:32:11.092055 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2025-02-04 09:32:11.092066 | orchestrator | Tuesday 04 February 2025 09:30:12 +0000 (0:00:00.376) 0:02:05.060 ****** 2025-02-04 09:32:11.092075 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:32:11.092084 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:32:11.092092 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:32:11.092101 | orchestrator | 2025-02-04 09:32:11.092110 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-02-04 09:32:11.092118 | orchestrator | Tuesday 04 February 2025 09:30:12 +0000 (0:00:00.374) 0:02:05.434 ****** 2025-02-04 09:32:11.092127 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:32:11.092136 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:32:11.092145 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:32:11.092153 | orchestrator | 2025-02-04 09:32:11.092162 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-02-04 09:32:11.092171 | orchestrator | Tuesday 04 February 2025 09:30:12 +0000 (0:00:00.315) 0:02:05.750 ****** 2025-02-04 09:32:11.092179 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:32:11.092188 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:32:11.092197 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:32:11.092205 | orchestrator | 2025-02-04 09:32:11.092214 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-02-04 09:32:11.092223 | orchestrator | Tuesday 04 February 2025 09:30:14 +0000 (0:00:01.371) 0:02:07.121 ****** 2025-02-04 09:32:11.092231 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:32:11.092240 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:32:11.092249 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:32:11.092257 | orchestrator | 2025-02-04 09:32:11.092266 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-02-04 09:32:11.092274 | orchestrator | 2025-02-04 09:32:11.092283 
| orchestrator | TASK [Get home directory of operator user] ************************************* 2025-02-04 09:32:11.092292 | orchestrator | Tuesday 04 February 2025 09:30:22 +0000 (0:00:07.912) 0:02:15.033 ****** 2025-02-04 09:32:11.092300 | orchestrator | ok: [testbed-manager] 2025-02-04 09:32:11.092309 | orchestrator | 2025-02-04 09:32:11.092318 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-02-04 09:32:11.092326 | orchestrator | Tuesday 04 February 2025 09:30:22 +0000 (0:00:00.456) 0:02:15.490 ****** 2025-02-04 09:32:11.092335 | orchestrator | changed: [testbed-manager] 2025-02-04 09:32:11.092344 | orchestrator | 2025-02-04 09:32:11.092352 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-02-04 09:32:11.092365 | orchestrator | Tuesday 04 February 2025 09:30:23 +0000 (0:00:00.427) 0:02:15.918 ****** 2025-02-04 09:32:11.092374 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-02-04 09:32:11.092383 | orchestrator | 2025-02-04 09:32:11.092392 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-02-04 09:32:11.092401 | orchestrator | Tuesday 04 February 2025 09:30:24 +0000 (0:00:00.978) 0:02:16.896 ****** 2025-02-04 09:32:11.092409 | orchestrator | changed: [testbed-manager] 2025-02-04 09:32:11.092418 | orchestrator | 2025-02-04 09:32:11.092431 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-02-04 09:32:11.092440 | orchestrator | Tuesday 04 February 2025 09:30:24 +0000 (0:00:00.732) 0:02:17.629 ****** 2025-02-04 09:32:11.092449 | orchestrator | changed: [testbed-manager] 2025-02-04 09:32:11.092457 | orchestrator | 2025-02-04 09:32:11.092466 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-02-04 09:32:11.092480 | orchestrator | Tuesday 04 February 2025 09:30:25 +0000 (0:00:00.820) 0:02:18.449 ****** 2025-02-04 09:32:11.092489 | orchestrator | changed: [testbed-manager -> localhost] 2025-02-04 09:32:11.092498 | orchestrator | 2025-02-04 09:32:11.092507 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-02-04 09:32:11.092515 | orchestrator | Tuesday 04 February 2025 09:30:26 +0000 (0:00:01.152) 0:02:19.602 ****** 2025-02-04 09:32:11.092524 | orchestrator | changed: [testbed-manager -> localhost] 2025-02-04 09:32:11.092533 | orchestrator | 2025-02-04 09:32:11.092541 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-02-04 09:32:11.092550 | orchestrator | Tuesday 04 February 2025 09:30:27 +0000 (0:00:00.648) 0:02:20.251 ****** 2025-02-04 09:32:11.092559 | orchestrator | changed: [testbed-manager] 2025-02-04 09:32:11.092568 | orchestrator | 2025-02-04 09:32:11.092576 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-02-04 09:32:11.092591 | orchestrator | Tuesday 04 February 2025 09:30:28 +0000 (0:00:00.529) 0:02:20.780 ****** 2025-02-04 09:32:11.092606 | orchestrator | changed: [testbed-manager] 2025-02-04 09:32:11.092620 | orchestrator | 2025-02-04 09:32:11.092633 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-02-04 09:32:11.092647 | orchestrator | 2025-02-04 09:32:11.092662 | orchestrator | TASK [osism.commons.kubectl : Gather variables for each operating system] ****** 2025-02-04 
09:32:11.092676 | orchestrator | Tuesday 04 February 2025 09:30:28 +0000 (0:00:00.745) 0:02:21.526 ****** 2025-02-04 09:32:11.092691 | orchestrator | [WARNING]: Found variable using reserved name: q 2025-02-04 09:32:11.092704 | orchestrator | ok: [testbed-manager] 2025-02-04 09:32:11.092719 | orchestrator | 2025-02-04 09:32:11.092733 | orchestrator | TASK [osism.commons.kubectl : Include distribution specific install tasks] ***** 2025-02-04 09:32:11.092748 | orchestrator | Tuesday 04 February 2025 09:30:28 +0000 (0:00:00.221) 0:02:21.748 ****** 2025-02-04 09:32:11.092763 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-02-04 09:32:11.092779 | orchestrator | 2025-02-04 09:32:11.092794 | orchestrator | TASK [osism.commons.kubectl : Remove old architecture-dependent repository] **** 2025-02-04 09:32:11.092810 | orchestrator | Tuesday 04 February 2025 09:30:29 +0000 (0:00:00.469) 0:02:22.217 ****** 2025-02-04 09:32:11.092839 | orchestrator | ok: [testbed-manager] 2025-02-04 09:32:11.092849 | orchestrator | 2025-02-04 09:32:11.092858 | orchestrator | TASK [osism.commons.kubectl : Install apt-transport-https package] ************* 2025-02-04 09:32:11.092867 | orchestrator | Tuesday 04 February 2025 09:30:30 +0000 (0:00:01.053) 0:02:23.270 ****** 2025-02-04 09:32:11.092875 | orchestrator | ok: [testbed-manager] 2025-02-04 09:32:11.092884 | orchestrator | 2025-02-04 09:32:11.092893 | orchestrator | TASK [osism.commons.kubectl : Add repository gpg key] ************************** 2025-02-04 09:32:11.092902 | orchestrator | Tuesday 04 February 2025 09:30:32 +0000 (0:00:02.219) 0:02:25.490 ****** 2025-02-04 09:32:11.092910 | orchestrator | changed: [testbed-manager] 2025-02-04 09:32:11.092919 | orchestrator | 2025-02-04 09:32:11.092928 | orchestrator | TASK [osism.commons.kubectl : Set permissions of gpg key] ********************** 2025-02-04 09:32:11.092936 | orchestrator | Tuesday 04 February 2025 09:30:33 +0000 (0:00:00.951) 0:02:26.441 ****** 2025-02-04 09:32:11.092945 | orchestrator | ok: [testbed-manager] 2025-02-04 09:32:11.092954 | orchestrator | 2025-02-04 09:32:11.092962 | orchestrator | TASK [osism.commons.kubectl : Add repository Debian] *************************** 2025-02-04 09:32:11.092971 | orchestrator | Tuesday 04 February 2025 09:30:34 +0000 (0:00:00.626) 0:02:27.067 ****** 2025-02-04 09:32:11.092980 | orchestrator | changed: [testbed-manager] 2025-02-04 09:32:11.092996 | orchestrator | 2025-02-04 09:32:11.093005 | orchestrator | TASK [osism.commons.kubectl : Install required packages] *********************** 2025-02-04 09:32:11.093014 | orchestrator | Tuesday 04 February 2025 09:30:42 +0000 (0:00:07.876) 0:02:34.944 ****** 2025-02-04 09:32:11.093022 | orchestrator | changed: [testbed-manager] 2025-02-04 09:32:11.093031 | orchestrator | 2025-02-04 09:32:11.093040 | orchestrator | TASK [osism.commons.kubectl : Remove kubectl symlink] ************************** 2025-02-04 09:32:11.093052 | orchestrator | Tuesday 04 February 2025 09:30:55 +0000 (0:00:13.645) 0:02:48.589 ****** 2025-02-04 09:32:11.093061 | orchestrator | ok: [testbed-manager] 2025-02-04 09:32:11.093070 | orchestrator | 2025-02-04 09:32:11.093079 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-02-04 09:32:11.093087 | orchestrator | 2025-02-04 09:32:11.093096 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure 
k3s cluster] *** 2025-02-04 09:32:11.093105 | orchestrator | Tuesday 04 February 2025 09:30:56 +0000 (0:00:00.577) 0:02:49.167 ****** 2025-02-04 09:32:11.093114 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:32:11.093122 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:32:11.093131 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:32:11.093140 | orchestrator | 2025-02-04 09:32:11.093148 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-02-04 09:32:11.093157 | orchestrator | Tuesday 04 February 2025 09:30:56 +0000 (0:00:00.574) 0:02:49.742 ****** 2025-02-04 09:32:11.093166 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:32:11.093174 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:32:11.093183 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:32:11.093192 | orchestrator | 2025-02-04 09:32:11.093200 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-02-04 09:32:11.093209 | orchestrator | Tuesday 04 February 2025 09:30:57 +0000 (0:00:00.307) 0:02:50.049 ****** 2025-02-04 09:32:11.093218 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:32:11.093227 | orchestrator | 2025-02-04 09:32:11.093235 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-02-04 09:32:11.093244 | orchestrator | Tuesday 04 February 2025 09:30:57 +0000 (0:00:00.616) 0:02:50.666 ****** 2025-02-04 09:32:11.093253 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-02-04 09:32:11.093261 | orchestrator | 2025-02-04 09:32:11.093270 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-02-04 09:32:11.093279 | orchestrator | Tuesday 04 February 2025 09:30:58 +0000 (0:00:00.923) 0:02:51.589 ****** 2025-02-04 09:32:11.093293 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-04 09:32:11.093303 | orchestrator | 2025-02-04 09:32:11.093312 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-02-04 09:32:11.093320 | orchestrator | Tuesday 04 February 2025 09:30:59 +0000 (0:00:00.584) 0:02:52.174 ****** 2025-02-04 09:32:11.093329 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:32:11.093341 | orchestrator | 2025-02-04 09:32:11.093350 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-02-04 09:32:11.093359 | orchestrator | Tuesday 04 February 2025 09:30:59 +0000 (0:00:00.205) 0:02:52.380 ****** 2025-02-04 09:32:11.093368 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-04 09:32:11.093377 | orchestrator | 2025-02-04 09:32:11.093386 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-02-04 09:32:11.093394 | orchestrator | Tuesday 04 February 2025 09:31:00 +0000 (0:00:00.983) 0:02:53.363 ****** 2025-02-04 09:32:11.093403 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:32:11.093412 | orchestrator | 2025-02-04 09:32:11.093421 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-02-04 09:32:11.093429 | orchestrator | Tuesday 04 February 2025 09:31:00 +0000 (0:00:00.201) 0:02:53.565 ****** 2025-02-04 09:32:11.093438 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:32:11.093447 | orchestrator | 2025-02-04 09:32:11.093456 | orchestrator | TASK [k3s_server_post : 
Determine if Cilium needs update] ********************** 2025-02-04 09:32:11.093469 | orchestrator | Tuesday 04 February 2025 09:31:01 +0000 (0:00:00.234) 0:02:53.800 ****** 2025-02-04 09:32:11.093478 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:32:11.093487 | orchestrator | 2025-02-04 09:32:11.093496 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-02-04 09:32:11.093504 | orchestrator | Tuesday 04 February 2025 09:31:01 +0000 (0:00:00.216) 0:02:54.016 ****** 2025-02-04 09:32:11.093513 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:32:11.093522 | orchestrator | 2025-02-04 09:32:11.093532 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-02-04 09:32:11.093540 | orchestrator | Tuesday 04 February 2025 09:31:01 +0000 (0:00:00.208) 0:02:54.225 ****** 2025-02-04 09:32:11.093549 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-02-04 09:32:11.093558 | orchestrator | 2025-02-04 09:32:11.093567 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-02-04 09:32:11.093575 | orchestrator | Tuesday 04 February 2025 09:31:12 +0000 (0:00:11.137) 0:03:05.362 ****** 2025-02-04 09:32:11.093584 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2025-02-04 09:32:11.093593 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2025-02-04 09:32:11.093602 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2025-02-04 09:32:11.093611 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2025-02-04 09:32:11.093619 | orchestrator | 2025-02-04 09:32:11.093628 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-02-04 09:32:11.093637 | orchestrator | Tuesday 04 February 2025 09:31:45 +0000 (0:00:32.463) 0:03:37.825 ****** 2025-02-04 09:32:11.093646 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-04 09:32:11.093654 | orchestrator | 2025-02-04 09:32:11.093663 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-02-04 09:32:11.093672 | orchestrator | Tuesday 04 February 2025 09:31:46 +0000 (0:00:01.836) 0:03:39.662 ****** 2025-02-04 09:32:11.093684 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-02-04 09:32:11.093693 | orchestrator | 2025-02-04 09:32:11.093705 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-02-04 09:32:11.093714 | orchestrator | Tuesday 04 February 2025 09:31:48 +0000 (0:00:01.237) 0:03:40.899 ****** 2025-02-04 09:32:11.093722 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-02-04 09:32:11.093731 | orchestrator | 2025-02-04 09:32:11.093740 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2025-02-04 09:32:11.093749 | orchestrator | Tuesday 04 February 2025 09:31:49 +0000 (0:00:01.315) 0:03:42.215 ****** 2025-02-04 09:32:11.093757 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:32:11.093769 | orchestrator | 2025-02-04 09:32:11.093783 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2025-02-04 09:32:11.093799 | orchestrator | Tuesday 04 February 2025 09:31:49 +0000 (0:00:00.210) 0:03:42.425 ****** 2025-02-04 09:32:11.093813 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get 
CiliumBGPPeeringPolicy.cilium.io) 2025-02-04 09:32:11.093842 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2025-02-04 09:32:11.093858 | orchestrator | 2025-02-04 09:32:11.093873 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-02-04 09:32:11.093888 | orchestrator | Tuesday 04 February 2025 09:31:51 +0000 (0:00:01.957) 0:03:44.382 ****** 2025-02-04 09:32:11.093902 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:32:11.093917 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:32:11.093931 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:32:11.093945 | orchestrator | 2025-02-04 09:32:11.093954 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-02-04 09:32:11.093963 | orchestrator | Tuesday 04 February 2025 09:31:52 +0000 (0:00:00.379) 0:03:44.761 ****** 2025-02-04 09:32:11.093972 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:32:11.093987 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:32:11.093995 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:32:11.094004 | orchestrator | 2025-02-04 09:32:11.094013 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-02-04 09:32:11.094051 | orchestrator | 2025-02-04 09:32:11.094060 | orchestrator | TASK [osism.commons.k9s : Gather variables for each operating system] ********** 2025-02-04 09:32:11.094068 | orchestrator | Tuesday 04 February 2025 09:31:53 +0000 (0:00:01.119) 0:03:45.881 ****** 2025-02-04 09:32:11.094077 | orchestrator | ok: [testbed-manager] 2025-02-04 09:32:11.094086 | orchestrator | 2025-02-04 09:32:11.094094 | orchestrator | TASK [osism.commons.k9s : Include distribution specific install tasks] ********* 2025-02-04 09:32:11.094103 | orchestrator | Tuesday 04 February 2025 09:31:53 +0000 (0:00:00.323) 0:03:46.205 ****** 2025-02-04 09:32:11.094119 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2025-02-04 09:32:11.094129 | orchestrator | 2025-02-04 09:32:11.094138 | orchestrator | TASK [osism.commons.k9s : Install k9s packages] ******************************** 2025-02-04 09:32:11.094146 | orchestrator | Tuesday 04 February 2025 09:31:53 +0000 (0:00:00.307) 0:03:46.513 ****** 2025-02-04 09:32:11.094155 | orchestrator | changed: [testbed-manager] 2025-02-04 09:32:11.094164 | orchestrator | 2025-02-04 09:32:11.094173 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-02-04 09:32:11.094181 | orchestrator | 2025-02-04 09:32:11.094190 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-02-04 09:32:11.094199 | orchestrator | Tuesday 04 February 2025 09:31:58 +0000 (0:00:05.162) 0:03:51.675 ****** 2025-02-04 09:32:11.094207 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:32:11.094216 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:32:11.094225 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:32:11.094233 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:32:11.094242 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:32:11.094251 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:32:11.094259 | orchestrator | 2025-02-04 09:32:11.094268 | orchestrator | TASK [Manage labels] *********************************************************** 2025-02-04 09:32:11.094277 | orchestrator | 
Tuesday 04 February 2025 09:31:59 +0000 (0:00:00.717) 0:03:52.393 ****** 2025-02-04 09:32:11.094286 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-02-04 09:32:11.094295 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-02-04 09:32:11.094303 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-02-04 09:32:11.094312 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-02-04 09:32:11.094320 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-02-04 09:32:11.094329 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-02-04 09:32:11.094338 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-02-04 09:32:11.094346 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-02-04 09:32:11.094355 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-02-04 09:32:11.094370 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-02-04 09:32:11.094391 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-02-04 09:32:11.094407 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-02-04 09:32:11.094422 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-02-04 09:32:11.094436 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-02-04 09:32:11.094445 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-02-04 09:32:11.094459 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-02-04 09:32:11.094468 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-02-04 09:32:11.094476 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-02-04 09:32:11.094485 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-02-04 09:32:11.094494 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-02-04 09:32:11.094502 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-02-04 09:32:11.094511 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-02-04 09:32:11.094519 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-02-04 09:32:11.094528 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-02-04 09:32:11.094536 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-02-04 09:32:11.094572 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-02-04 09:32:11.094582 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-02-04 09:32:11.094591 | orchestrator | ok: [testbed-node-0 -> localhost] => 
(item=node-role.osism.tech/rook-rgw=true) 2025-02-04 09:32:11.094600 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-02-04 09:32:11.094608 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-02-04 09:32:11.094617 | orchestrator | 2025-02-04 09:32:11.094626 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-02-04 09:32:11.094634 | orchestrator | Tuesday 04 February 2025 09:32:09 +0000 (0:00:09.496) 0:04:01.890 ****** 2025-02-04 09:32:11.094643 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:32:11.094651 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:32:11.094660 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:32:11.094669 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:32:11.094678 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:32:11.094686 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:32:11.094695 | orchestrator | 2025-02-04 09:32:11.094709 | orchestrator | TASK [Manage taints] *********************************************************** 2025-02-04 09:32:14.124652 | orchestrator | Tuesday 04 February 2025 09:32:09 +0000 (0:00:00.465) 0:04:02.356 ****** 2025-02-04 09:32:14.124782 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:32:14.124803 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:32:14.124819 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:32:14.124897 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:32:14.124913 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:32:14.124927 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:32:14.124941 | orchestrator | 2025-02-04 09:32:14.124957 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:32:14.124972 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-04 09:32:14.124989 | orchestrator | testbed-node-0 : ok=46  changed=21  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-02-04 09:32:14.125004 | orchestrator | testbed-node-1 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-02-04 09:32:14.125018 | orchestrator | testbed-node-2 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-02-04 09:32:14.125033 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-02-04 09:32:14.125075 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-02-04 09:32:14.125090 | orchestrator | testbed-node-5 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-02-04 09:32:14.125104 | orchestrator | 2025-02-04 09:32:14.125118 | orchestrator | 2025-02-04 09:32:14.125133 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:32:14.125147 | orchestrator | Tuesday 04 February 2025 09:32:10 +0000 (0:00:00.655) 0:04:03.011 ****** 2025-02-04 09:32:14.125161 | orchestrator | =============================================================================== 2025-02-04 09:32:14.125175 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 45.08s 2025-02-04 09:32:14.125191 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 32.46s 2025-02-04 
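
The "Manage labels" task above fans out node-role labels per host, delegating each kubectl call to localhost. A minimal sketch of that pattern; the label values are copied from the log, while the variable name k3s_node_labels and the use of ansible.builtin.command (rather than whatever module the playbook actually uses) are assumptions:

  - name: Manage labels (pattern sketch)
    ansible.builtin.command: "kubectl label node {{ inventory_hostname }} {{ item }} --overwrite"
    loop: "{{ k3s_node_labels }}"
    delegate_to: localhost
    vars:
      k3s_node_labels:    # example values copied from the log
        - node-role.osism.tech/control-plane=true
        - openstack-control-plane=enabled
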
09:32:14.125205 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 17.32s 2025-02-04 09:32:14.125220 | orchestrator | osism.commons.kubectl : Install required packages ---------------------- 13.65s 2025-02-04 09:32:14.125234 | orchestrator | k3s_server_post : Install Cilium --------------------------------------- 11.14s 2025-02-04 09:32:14.125264 | orchestrator | Manage labels ----------------------------------------------------------- 9.50s 2025-02-04 09:32:14.125279 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 7.91s 2025-02-04 09:32:14.125293 | orchestrator | osism.commons.kubectl : Add repository Debian --------------------------- 7.88s 2025-02-04 09:32:14.125307 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.24s 2025-02-04 09:32:14.125321 | orchestrator | osism.commons.k9s : Install k9s packages -------------------------------- 5.16s 2025-02-04 09:32:14.125335 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.95s 2025-02-04 09:32:14.125350 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.70s 2025-02-04 09:32:14.125364 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 2.49s 2025-02-04 09:32:14.125378 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.36s 2025-02-04 09:32:14.125392 | orchestrator | osism.commons.kubectl : Install apt-transport-https package ------------- 2.22s 2025-02-04 09:32:14.125406 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.13s 2025-02-04 09:32:14.125420 | orchestrator | k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites --- 2.10s 2025-02-04 09:32:14.125434 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.96s 2025-02-04 09:32:14.125448 | orchestrator | k3s_server_post : Set _cilium_bgp_neighbors fact ------------------------ 1.84s 2025-02-04 09:32:14.125462 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.77s 2025-02-04 09:32:14.125477 | orchestrator | 2025-02-04 09:32:11 | INFO  | Task 02694c06-bc5b-45b7-8fe0-bd32e214c7cd is in state STARTED 2025-02-04 09:32:14.125491 | orchestrator | 2025-02-04 09:32:11 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:32:14.125522 | orchestrator | 2025-02-04 09:32:14 | INFO  | Task eb05d1bb-35bd-4ebd-b250-867645df8ceb is in state STARTED 2025-02-04 09:32:14.126643 | orchestrator | 2025-02-04 09:32:14 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:32:14.126684 | orchestrator | 2025-02-04 09:32:14 | INFO  | Task 9b6a9e6c-6e01-4e24-ab43-0034883e0aef is in state STARTED 2025-02-04 09:32:14.127100 | orchestrator | 2025-02-04 09:32:14 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED 2025-02-04 09:32:14.127553 | orchestrator | 2025-02-04 09:32:14 | INFO  | Task 4b1e67be-67e6-4e43-821a-22f805ba6da4 is in state STARTED 2025-02-04 09:32:14.128181 | orchestrator | 2025-02-04 09:32:14 | INFO  | Task 02694c06-bc5b-45b7-8fe0-bd32e214c7cd is in state STARTED 2025-02-04 09:32:17.191370 | orchestrator | 2025-02-04 09:32:14 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:32:17.191517 | orchestrator | 2025-02-04 09:32:17 | INFO  | Task 
eb05d1bb-35bd-4ebd-b250-867645df8ceb is in state STARTED 2025-02-04 09:32:17.192081 | orchestrator | 2025-02-04 09:32:17 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:32:17.192115 | orchestrator | 2025-02-04 09:32:17 | INFO  | Task 9b6a9e6c-6e01-4e24-ab43-0034883e0aef is in state STARTED 2025-02-04 09:32:17.192545 | orchestrator | 2025-02-04 09:32:17 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED 2025-02-04 09:32:17.193542 | orchestrator | 2025-02-04 09:32:17 | INFO  | Task 4b1e67be-67e6-4e43-821a-22f805ba6da4 is in state STARTED 2025-02-04 09:32:17.194531 | orchestrator | 2025-02-04 09:32:17 | INFO  | Task 02694c06-bc5b-45b7-8fe0-bd32e214c7cd is in state STARTED 2025-02-04 09:32:20.249405 | orchestrator | 2025-02-04 09:32:17 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:32:20.249631 | orchestrator | 2025-02-04 09:32:20 | INFO  | Task eb05d1bb-35bd-4ebd-b250-867645df8ceb is in state SUCCESS 2025-02-04 09:32:20.249811 | orchestrator | 2025-02-04 09:32:20 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:32:20.254247 | orchestrator | 2025-02-04 09:32:20 | INFO  | Task 9b6a9e6c-6e01-4e24-ab43-0034883e0aef is in state STARTED 2025-02-04 09:32:20.256640 | orchestrator | 2025-02-04 09:32:20 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED 2025-02-04 09:32:20.259717 | orchestrator | 2025-02-04 09:32:20 | INFO  | Task 4b1e67be-67e6-4e43-821a-22f805ba6da4 is in state STARTED 2025-02-04 09:32:20.265336 | orchestrator | 2025-02-04 09:32:20 | INFO  | Task 02694c06-bc5b-45b7-8fe0-bd32e214c7cd is in state STARTED 2025-02-04 09:32:23.324172 | orchestrator | 2025-02-04 09:32:20 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:32:23.324309 | orchestrator | 2025-02-04 09:32:23 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:32:23.325724 | orchestrator | 2025-02-04 09:32:23 | INFO  | Task 9b6a9e6c-6e01-4e24-ab43-0034883e0aef is in state SUCCESS 2025-02-04 09:32:23.325806 | orchestrator | 2025-02-04 09:32:23 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED 2025-02-04 09:32:23.325943 | orchestrator | 2025-02-04 09:32:23 | INFO  | Task 4b1e67be-67e6-4e43-821a-22f805ba6da4 is in state STARTED 2025-02-04 09:32:23.329017 | orchestrator | 2025-02-04 09:32:23 | INFO  | Task 02694c06-bc5b-45b7-8fe0-bd32e214c7cd is in state STARTED 2025-02-04 09:32:23.329139 | orchestrator | 2025-02-04 09:32:23 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:32:26.376236 | orchestrator | 2025-02-04 09:32:26 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:32:26.378262 | orchestrator | 2025-02-04 09:32:26 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED 2025-02-04 09:32:26.380733 | orchestrator | 2025-02-04 09:32:26 | INFO  | Task 4b1e67be-67e6-4e43-821a-22f805ba6da4 is in state STARTED 2025-02-04 09:32:26.380781 | orchestrator | 2025-02-04 09:32:26 | INFO  | Task 02694c06-bc5b-45b7-8fe0-bd32e214c7cd is in state STARTED 2025-02-04 09:32:29.437133 | orchestrator | 2025-02-04 09:32:26 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:32:29.437295 | orchestrator | 2025-02-04 09:32:29 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:32:29.437550 | orchestrator | 2025-02-04 09:32:29 | INFO  | Task a8c189fe-3efa-4bf1-a2e1-63a40712887e is in state STARTED 2025-02-04 09:32:29.442381 | orchestrator | 2025-02-04 
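
The interleaved INFO lines here come from the OSISM manager polling its background tasks once per second until each reaches SUCCESS. The same polling shape expressed as an Ansible until/retries loop; check_task.sh is a hypothetical helper that prints the task state, not a real OSISM command:

  - name: Wait for a manager task to reach SUCCESS (polling sketch)
    ansible.builtin.command: "./check_task.sh {{ task_id }}"   # hypothetical helper
    register: task_state
    until: task_state.stdout == "SUCCESS"
    retries: 600      # give up after ~10 minutes
    delay: 1          # matches the "Wait 1 second(s)" cadence in the log
    changed_when: false
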
09:32:29 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED 2025-02-04 09:32:29.445550 | orchestrator | 2025-02-04 09:32:29 | INFO  | Task 4b1e67be-67e6-4e43-821a-22f805ba6da4 is in state STARTED 2025-02-04 09:32:29.446610 | orchestrator | 2025-02-04 09:32:29 | INFO  | Task 02694c06-bc5b-45b7-8fe0-bd32e214c7cd is in state STARTED 2025-02-04 09:32:29.449378 | orchestrator | 2025-02-04 09:32:29 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:32:32.494253 | orchestrator | 2025-02-04 09:32:32 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:32:32.500010 | orchestrator | 2025-02-04 09:32:32 | INFO  | Task a8c189fe-3efa-4bf1-a2e1-63a40712887e is in state STARTED 2025-02-04 09:32:32.501034 | orchestrator | 2025-02-04 09:32:32 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED 2025-02-04 09:32:32.502132 | orchestrator | 2025-02-04 09:32:32 | INFO  | Task 4b1e67be-67e6-4e43-821a-22f805ba6da4 is in state STARTED 2025-02-04 09:32:32.503568 | orchestrator | 2025-02-04 09:32:32 | INFO  | Task 02694c06-bc5b-45b7-8fe0-bd32e214c7cd is in state STARTED 2025-02-04 09:32:32.504282 | orchestrator | 2025-02-04 09:32:32 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:32:35.547181 | orchestrator | 2025-02-04 09:32:35 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:32:35.549077 | orchestrator | 2025-02-04 09:32:35 | INFO  | Task a8c189fe-3efa-4bf1-a2e1-63a40712887e is in state STARTED 2025-02-04 09:32:35.549158 | orchestrator | 2025-02-04 09:32:35 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED 2025-02-04 09:32:35.549369 | orchestrator | 2025-02-04 09:32:35 | INFO  | Task 4b1e67be-67e6-4e43-821a-22f805ba6da4 is in state STARTED 2025-02-04 09:32:35.550186 | orchestrator | 2025-02-04 09:32:35 | INFO  | Task 02694c06-bc5b-45b7-8fe0-bd32e214c7cd is in state SUCCESS 2025-02-04 09:32:35.551972 | orchestrator | 2025-02-04 09:32:35.552020 | orchestrator | 2025-02-04 09:32:35.552035 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-02-04 09:32:35.552051 | orchestrator | 2025-02-04 09:32:35.552066 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-02-04 09:32:35.552080 | orchestrator | Tuesday 04 February 2025 09:32:14 +0000 (0:00:00.209) 0:00:00.209 ****** 2025-02-04 09:32:35.552096 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-02-04 09:32:35.552110 | orchestrator | 2025-02-04 09:32:35.552124 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-02-04 09:32:35.552138 | orchestrator | Tuesday 04 February 2025 09:32:15 +0000 (0:00:00.991) 0:00:01.201 ****** 2025-02-04 09:32:35.552153 | orchestrator | changed: [testbed-manager] 2025-02-04 09:32:35.552168 | orchestrator | 2025-02-04 09:32:35.552182 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-02-04 09:32:35.552197 | orchestrator | Tuesday 04 February 2025 09:32:17 +0000 (0:00:01.476) 0:00:02.677 ****** 2025-02-04 09:32:35.552211 | orchestrator | changed: [testbed-manager] 2025-02-04 09:32:35.552226 | orchestrator | 2025-02-04 09:32:35.552240 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:32:35.552255 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 
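
The "Copy kubeconfig to the configuration repository" play above reads the kubeconfig from testbed-node-0, writes it out, and rewrites the server address. A minimal sketch of those three tasks: the /etc/rancher/k3s/k3s.yaml path and the https://127.0.0.1:6443 server entry are standard k3s defaults, while kubeconfig_dest and k3s_server_address are illustrative variables:

  - name: Get kubeconfig file (sketch)
    ansible.builtin.slurp:
      src: /etc/rancher/k3s/k3s.yaml
    delegate_to: testbed-node-0
    register: kubeconfig

  - name: Write kubeconfig file
    ansible.builtin.copy:
      content: "{{ kubeconfig.content | b64decode }}"
      dest: "{{ kubeconfig_dest }}"
      mode: "0600"

  - name: Change server address in the kubeconfig file
    ansible.builtin.replace:
      path: "{{ kubeconfig_dest }}"
      regexp: 'https://127\.0\.0\.1:6443'
      replace: "https://{{ k3s_server_address }}:6443"
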
2025-02-04 09:32:35.552271 | orchestrator | 2025-02-04 09:32:35.552285 | orchestrator | 2025-02-04 09:32:35.552300 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:32:35.552314 | orchestrator | Tuesday 04 February 2025 09:32:17 +0000 (0:00:00.579) 0:00:03.257 ****** 2025-02-04 09:32:35.552349 | orchestrator | =============================================================================== 2025-02-04 09:32:35.552364 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.48s 2025-02-04 09:32:35.552378 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.99s 2025-02-04 09:32:35.552392 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.58s 2025-02-04 09:32:35.552406 | orchestrator | 2025-02-04 09:32:35.552420 | orchestrator | 2025-02-04 09:32:35.552434 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-02-04 09:32:35.552449 | orchestrator | 2025-02-04 09:32:35.552464 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-02-04 09:32:35.552480 | orchestrator | Tuesday 04 February 2025 09:32:14 +0000 (0:00:00.218) 0:00:00.218 ****** 2025-02-04 09:32:35.552496 | orchestrator | ok: [testbed-manager] 2025-02-04 09:32:35.552512 | orchestrator | 2025-02-04 09:32:35.552537 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-02-04 09:32:35.552555 | orchestrator | Tuesday 04 February 2025 09:32:15 +0000 (0:00:00.745) 0:00:00.963 ****** 2025-02-04 09:32:35.552571 | orchestrator | ok: [testbed-manager] 2025-02-04 09:32:35.552587 | orchestrator | 2025-02-04 09:32:35.552604 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-02-04 09:32:35.552621 | orchestrator | Tuesday 04 February 2025 09:32:16 +0000 (0:00:00.868) 0:00:01.832 ****** 2025-02-04 09:32:35.552637 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-02-04 09:32:35.552653 | orchestrator | 2025-02-04 09:32:35.552669 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-02-04 09:32:35.552684 | orchestrator | Tuesday 04 February 2025 09:32:17 +0000 (0:00:01.005) 0:00:02.838 ****** 2025-02-04 09:32:35.552701 | orchestrator | changed: [testbed-manager] 2025-02-04 09:32:35.552718 | orchestrator | 2025-02-04 09:32:35.552741 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-02-04 09:32:35.552766 | orchestrator | Tuesday 04 February 2025 09:32:18 +0000 (0:00:01.604) 0:00:04.442 ****** 2025-02-04 09:32:35.552788 | orchestrator | changed: [testbed-manager] 2025-02-04 09:32:35.552812 | orchestrator | 2025-02-04 09:32:35.552857 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-02-04 09:32:35.552880 | orchestrator | Tuesday 04 February 2025 09:32:19 +0000 (0:00:00.758) 0:00:05.201 ****** 2025-02-04 09:32:35.552902 | orchestrator | changed: [testbed-manager -> localhost] 2025-02-04 09:32:35.552924 | orchestrator | 2025-02-04 09:32:35.552945 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-02-04 09:32:35.552967 | orchestrator | Tuesday 04 February 2025 09:32:20 +0000 (0:00:01.096) 0:00:06.297 ****** 2025-02-04 09:32:35.552989 | orchestrator | changed: 
[testbed-manager -> localhost] 2025-02-04 09:32:35.553012 | orchestrator | 2025-02-04 09:32:35.553037 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-02-04 09:32:35.553061 | orchestrator | Tuesday 04 February 2025 09:32:21 +0000 (0:00:00.751) 0:00:07.049 ****** 2025-02-04 09:32:35.553084 | orchestrator | ok: [testbed-manager] 2025-02-04 09:32:35.553099 | orchestrator | 2025-02-04 09:32:35.553113 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-02-04 09:32:35.553127 | orchestrator | Tuesday 04 February 2025 09:32:21 +0000 (0:00:00.542) 0:00:07.592 ****** 2025-02-04 09:32:35.553141 | orchestrator | ok: [testbed-manager] 2025-02-04 09:32:35.553155 | orchestrator | 2025-02-04 09:32:35.553170 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:32:35.553184 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-04 09:32:35.553199 | orchestrator | 2025-02-04 09:32:35.553213 | orchestrator | 2025-02-04 09:32:35.553227 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:32:35.553241 | orchestrator | Tuesday 04 February 2025 09:32:22 +0000 (0:00:00.390) 0:00:07.982 ****** 2025-02-04 09:32:35.553266 | orchestrator | =============================================================================== 2025-02-04 09:32:35.553280 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.60s 2025-02-04 09:32:35.553294 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.10s 2025-02-04 09:32:35.553308 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.01s 2025-02-04 09:32:35.553335 | orchestrator | Create .kube directory -------------------------------------------------- 0.87s 2025-02-04 09:32:35.553351 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.76s 2025-02-04 09:32:35.553365 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.75s 2025-02-04 09:32:35.553379 | orchestrator | Get home directory of operator user ------------------------------------- 0.75s 2025-02-04 09:32:35.553393 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.54s 2025-02-04 09:32:35.553408 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.39s 2025-02-04 09:32:35.553421 | orchestrator | 2025-02-04 09:32:35.553436 | orchestrator | 2025-02-04 09:32:35.553450 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-02-04 09:32:35.553464 | orchestrator | 2025-02-04 09:32:35.553478 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-02-04 09:32:35.553492 | orchestrator | Tuesday 04 February 2025 09:30:04 +0000 (0:00:00.238) 0:00:00.238 ****** 2025-02-04 09:32:35.553506 | orchestrator | ok: [localhost] => { 2025-02-04 09:32:35.553522 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 
2025-02-04 09:32:35.553536 | orchestrator | } 2025-02-04 09:32:35.553551 | orchestrator | 2025-02-04 09:32:35.553572 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-02-04 09:32:35.553586 | orchestrator | Tuesday 04 February 2025 09:30:04 +0000 (0:00:00.047) 0:00:00.285 ****** 2025-02-04 09:32:35.553602 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-02-04 09:32:35.553617 | orchestrator | ...ignoring 2025-02-04 09:32:35.553632 | orchestrator | 2025-02-04 09:32:35.553646 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-02-04 09:32:35.553661 | orchestrator | Tuesday 04 February 2025 09:30:07 +0000 (0:00:03.086) 0:00:03.372 ****** 2025-02-04 09:32:35.553675 | orchestrator | skipping: [localhost] 2025-02-04 09:32:35.553689 | orchestrator | 2025-02-04 09:32:35.553703 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-02-04 09:32:35.553717 | orchestrator | Tuesday 04 February 2025 09:30:07 +0000 (0:00:00.086) 0:00:03.459 ****** 2025-02-04 09:32:35.553732 | orchestrator | ok: [localhost] 2025-02-04 09:32:35.553746 | orchestrator | 2025-02-04 09:32:35.553760 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-04 09:32:35.553774 | orchestrator | 2025-02-04 09:32:35.553789 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-04 09:32:35.553803 | orchestrator | Tuesday 04 February 2025 09:30:07 +0000 (0:00:00.239) 0:00:03.698 ****** 2025-02-04 09:32:35.553817 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:32:35.553855 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:32:35.553876 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:32:35.553891 | orchestrator | 2025-02-04 09:32:35.553905 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-04 09:32:35.553919 | orchestrator | Tuesday 04 February 2025 09:30:07 +0000 (0:00:00.462) 0:00:04.161 ****** 2025-02-04 09:32:35.553934 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-02-04 09:32:35.553948 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-02-04 09:32:35.553963 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-02-04 09:32:35.553977 | orchestrator | 2025-02-04 09:32:35.553991 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-02-04 09:32:35.554012 | orchestrator | 2025-02-04 09:32:35.554131 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-02-04 09:32:35.554146 | orchestrator | Tuesday 04 February 2025 09:30:08 +0000 (0:00:00.621) 0:00:04.782 ****** 2025-02-04 09:32:35.554161 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:32:35.554177 | orchestrator | 2025-02-04 09:32:35.554191 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-02-04 09:32:35.554205 | orchestrator | Tuesday 04 February 2025 09:30:09 +0000 (0:00:00.945) 0:00:05.728 ****** 2025-02-04 09:32:35.554219 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:32:35.554233 | orchestrator | 2025-02-04 09:32:35.554248 | orchestrator | TASK 
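
The ignored failure above is the expected outcome of probing a not-yet-deployed RabbitMQ: the play connects to the management port, searches the response for "RabbitMQ Management", and derives the kolla action from the result. A sketch of that probe, reconstructed from the wait_for-style error message in the log (the short timeout mirrors the logged "elapsed": 2):

  - name: Check RabbitMQ service (probe sketch)
    ansible.builtin.wait_for:
      host: 192.168.16.9
      port: 15672
      search_regex: "RabbitMQ Management"
      timeout: 2
    register: rabbitmq_check
    ignore_errors: true

  - name: Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running
    ansible.builtin.set_fact:
      kolla_action_rabbitmq: upgrade
    when: rabbitmq_check is not failed
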
[rabbitmq : Get current RabbitMQ version] ********************************* 2025-02-04 09:32:35.554261 | orchestrator | Tuesday 04 February 2025 09:30:11 +0000 (0:00:01.596) 0:00:07.324 ****** 2025-02-04 09:32:35.554276 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:32:35.554290 | orchestrator | 2025-02-04 09:32:35.554304 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-02-04 09:32:35.554318 | orchestrator | Tuesday 04 February 2025 09:30:12 +0000 (0:00:01.347) 0:00:08.672 ****** 2025-02-04 09:32:35.554332 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:32:35.554346 | orchestrator | 2025-02-04 09:32:35.554360 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-02-04 09:32:35.554375 | orchestrator | Tuesday 04 February 2025 09:30:13 +0000 (0:00:00.903) 0:00:09.575 ****** 2025-02-04 09:32:35.554389 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:32:35.554403 | orchestrator | 2025-02-04 09:32:35.554417 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-02-04 09:32:35.554431 | orchestrator | Tuesday 04 February 2025 09:30:13 +0000 (0:00:00.414) 0:00:09.989 ****** 2025-02-04 09:32:35.554445 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:32:35.554466 | orchestrator | 2025-02-04 09:32:35.554480 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-02-04 09:32:35.554494 | orchestrator | Tuesday 04 February 2025 09:30:14 +0000 (0:00:00.389) 0:00:10.379 ****** 2025-02-04 09:32:35.554508 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:32:35.554523 | orchestrator | 2025-02-04 09:32:35.554537 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-02-04 09:32:35.554560 | orchestrator | Tuesday 04 February 2025 09:30:15 +0000 (0:00:01.577) 0:00:11.957 ****** 2025-02-04 09:32:35.554575 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:32:35.554590 | orchestrator | 2025-02-04 09:32:35.554604 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-02-04 09:32:35.554618 | orchestrator | Tuesday 04 February 2025 09:30:16 +0000 (0:00:00.944) 0:00:12.901 ****** 2025-02-04 09:32:35.554632 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:32:35.554646 | orchestrator | 2025-02-04 09:32:35.554660 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-02-04 09:32:35.554674 | orchestrator | Tuesday 04 February 2025 09:30:17 +0000 (0:00:01.013) 0:00:13.915 ****** 2025-02-04 09:32:35.554688 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:32:35.554702 | orchestrator | 2025-02-04 09:32:35.554721 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-02-04 09:32:35.554735 | orchestrator | Tuesday 04 February 2025 09:30:18 +0000 (0:00:00.681) 0:00:14.596 ****** 2025-02-04 09:32:35.554754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-04 09:32:35.554782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-04 09:32:35.554798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-04 09:32:35.554814 | orchestrator | 2025-02-04 09:32:35.554828 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-02-04 09:32:35.554869 | orchestrator | Tuesday 04 February 2025 09:30:19 +0000 (0:00:01.572) 0:00:16.169 ****** 2025-02-04 09:32:35.554894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-04 09:32:35.554917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-04 09:32:35.554932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-04 09:32:35.554948 | orchestrator | 2025-02-04 09:32:35.554962 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-02-04 09:32:35.554976 | orchestrator | Tuesday 04 February 2025 09:30:21 +0000 (0:00:01.830) 0:00:18.000 ****** 2025-02-04 09:32:35.554991 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-02-04 09:32:35.555005 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-02-04 09:32:35.555019 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-02-04 09:32:35.555033 | orchestrator | 2025-02-04 09:32:35.555047 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-02-04 09:32:35.555062 | orchestrator | Tuesday 04 February 2025 09:30:24 +0000 (0:00:02.555) 0:00:20.555 ****** 2025-02-04 09:32:35.555075 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-02-04 09:32:35.555090 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-02-04 09:32:35.555104 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-02-04 09:32:35.555118 | orchestrator | 2025-02-04 09:32:35.555132 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-02-04 09:32:35.555152 | orchestrator | Tuesday 04 February 2025 09:30:28 +0000 (0:00:04.509) 0:00:25.065 ****** 2025-02-04 09:32:35.555166 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-02-04 09:32:35.555180 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-02-04 09:32:35.555194 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-02-04 09:32:35.555214 | orchestrator | 2025-02-04 09:32:35.555228 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-02-04 09:32:35.555242 | orchestrator | Tuesday 04 February 2025 09:30:32 +0000 (0:00:03.958) 0:00:29.023 ****** 2025-02-04 09:32:35.555256 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-02-04 09:32:35.555270 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-02-04 09:32:35.555284 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-02-04 09:32:35.555298 | orchestrator | 2025-02-04 09:32:35.555312 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-02-04 09:32:35.555326 | orchestrator | Tuesday 04 February 2025 09:30:37 +0000 (0:00:04.585) 0:00:33.608 ****** 2025-02-04 09:32:35.555340 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-02-04 09:32:35.555354 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-02-04 09:32:35.555368 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-02-04 09:32:35.555382 | orchestrator | 2025-02-04 09:32:35.555396 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-02-04 09:32:35.555410 | orchestrator | Tuesday 04 February 2025 09:30:39 +0000 (0:00:02.254) 0:00:35.862 ****** 2025-02-04 09:32:35.555424 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-02-04 09:32:35.555438 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-02-04 09:32:35.555457 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-02-04 09:32:35.555472 | orchestrator | 2025-02-04 09:32:35.555486 | orchestrator 
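
Each "Copying over …" task above follows the usual template-plus-handler shape: render a Jinja2 template into /etc/kolla/rabbitmq/ and queue the restart handler, which then fires once per host after all config tasks (the RUNNING HANDLER further down). A minimal sketch of one such task; the file mode and the exact handler wiring are assumptions:

  - name: Copying over rabbitmq.conf (pattern sketch)
    ansible.builtin.template:
      src: rabbitmq.conf.j2
      dest: /etc/kolla/rabbitmq/rabbitmq.conf
      mode: "0660"
    notify:
      - Restart rabbitmq container
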
| TASK [rabbitmq : include_tasks] ************************************************ 2025-02-04 09:32:35.555500 | orchestrator | Tuesday 04 February 2025 09:30:42 +0000 (0:00:02.365) 0:00:38.227 ****** 2025-02-04 09:32:35.555515 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:32:35.555529 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:32:35.555543 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:32:35.555557 | orchestrator | 2025-02-04 09:32:35.555572 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-02-04 09:32:35.555586 | orchestrator | Tuesday 04 February 2025 09:30:43 +0000 (0:00:01.628) 0:00:39.856 ****** 2025-02-04 09:32:35.555601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-04 09:32:35.555623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-04 09:32:35.555646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-02-04 09:32:35.555661 | orchestrator | 2025-02-04 09:32:35.555675 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-02-04 09:32:35.555689 | orchestrator | Tuesday 04 February 2025 09:30:47 +0000 (0:00:03.689) 0:00:43.545 ****** 2025-02-04 09:32:35.555703 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:32:35.555718 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:32:35.555732 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:32:35.555746 | orchestrator | 2025-02-04 09:32:35.555760 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-02-04 09:32:35.555774 | orchestrator | Tuesday 04 February 2025 09:30:48 +0000 (0:00:01.222) 0:00:44.767 ****** 2025-02-04 09:32:35.555787 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:32:35.555802 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:32:35.555816 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:32:35.555830 | orchestrator | 2025-02-04 09:32:35.555878 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-02-04 09:32:35.555892 | orchestrator | Tuesday 04 February 2025 09:30:53 +0000 (0:00:05.100) 0:00:49.868 ****** 2025-02-04 09:32:35.555906 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:32:35.555921 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:32:35.555935 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:32:35.555949 | orchestrator | 2025-02-04 09:32:35.555963 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-02-04 09:32:35.555977 | orchestrator | 2025-02-04 09:32:35.555991 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-02-04 09:32:35.556005 | orchestrator | Tuesday 04 February 2025 09:30:54 +0000 (0:00:00.459) 0:00:50.328 ****** 2025-02-04 09:32:35.556019 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:32:35.556033 | orchestrator | 2025-02-04 09:32:35.556052 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-02-04 09:32:35.556067 | orchestrator | Tuesday 04 February 2025 09:30:55 +0000 (0:00:01.010) 0:00:51.338 ****** 2025-02-04 09:32:35.556081 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:32:35.556095 | orchestrator | 2025-02-04 09:32:35.556109 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-02-04 09:32:35.556130 | orchestrator | Tuesday 04 February 2025 09:30:55 +0000 (0:00:00.607) 0:00:51.946 ****** 2025-02-04 09:32:35.556144 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:32:35.556158 | orchestrator | 2025-02-04 09:32:35.556172 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-02-04 09:32:35.556186 | orchestrator | Tuesday 04 February 2025 09:30:57 +0000 (0:00:01.894) 0:00:53.841 ****** 2025-02-04 09:32:35.556200 | orchestrator | changed: [testbed-node-0] 2025-02-04 
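
The container definition echoed three times above maps directly onto a container spec: image, environment, config and log volumes, plus a healthcheck. A simplified sketch with community.docker.docker_container using values from the log; kolla-ansible itself drives containers through its own kolla_docker module, and the cluster cookie is omitted here because it is a secret:

  - name: Check rabbitmq containers (simplified container spec sketch)
    community.docker.docker_container:
      name: rabbitmq
      image: nexus.testbed.osism.xyz:8193/kolla/rabbitmq:2024.1
      env:
        KOLLA_CONFIG_STRATEGY: COPY_ALWAYS
        RABBITMQ_LOG_DIR: /var/log/kolla/rabbitmq
      volumes:
        - /etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro
        - /etc/localtime:/etc/localtime:ro
        - rabbitmq:/var/lib/rabbitmq/
        - kolla_logs:/var/log/kolla/
      healthcheck:
        test: ["CMD-SHELL", "healthcheck_rabbitmq"]
        interval: 30s
        timeout: 30s
        retries: 3
        start_period: 5s
      restart_policy: unless-stopped
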
09:32:35.556214 | orchestrator | 2025-02-04 09:32:35.556229 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-02-04 09:32:35.556243 | orchestrator | 2025-02-04 09:32:35.556257 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-02-04 09:32:35.556271 | orchestrator | Tuesday 04 February 2025 09:31:51 +0000 (0:00:54.219) 0:01:48.061 ****** 2025-02-04 09:32:35.556285 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:32:35.556299 | orchestrator | 2025-02-04 09:32:35.556313 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-02-04 09:32:35.556327 | orchestrator | Tuesday 04 February 2025 09:31:52 +0000 (0:00:00.710) 0:01:48.771 ****** 2025-02-04 09:32:35.556341 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:32:35.556355 | orchestrator | 2025-02-04 09:32:35.556369 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-02-04 09:32:35.556383 | orchestrator | Tuesday 04 February 2025 09:31:52 +0000 (0:00:00.307) 0:01:49.079 ****** 2025-02-04 09:32:35.556398 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:32:35.556412 | orchestrator | 2025-02-04 09:32:35.556426 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-02-04 09:32:35.556440 | orchestrator | Tuesday 04 February 2025 09:31:54 +0000 (0:00:01.641) 0:01:50.720 ****** 2025-02-04 09:32:35.556454 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:32:35.556468 | orchestrator | 2025-02-04 09:32:35.556482 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-02-04 09:32:35.556496 | orchestrator | 2025-02-04 09:32:35.556510 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-02-04 09:32:35.556525 | orchestrator | Tuesday 04 February 2025 09:32:07 +0000 (0:00:13.395) 0:02:04.116 ****** 2025-02-04 09:32:35.556545 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:32:35.556560 | orchestrator | 2025-02-04 09:32:35.556574 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-02-04 09:32:35.556588 | orchestrator | Tuesday 04 February 2025 09:32:08 +0000 (0:00:00.612) 0:02:04.728 ****** 2025-02-04 09:32:35.556602 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:32:35.556616 | orchestrator | 2025-02-04 09:32:35.556630 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-02-04 09:32:35.556644 | orchestrator | Tuesday 04 February 2025 09:32:08 +0000 (0:00:00.361) 0:02:05.090 ****** 2025-02-04 09:32:35.556658 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:32:35.556672 | orchestrator | 2025-02-04 09:32:35.556686 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-02-04 09:32:35.556700 | orchestrator | Tuesday 04 February 2025 09:32:10 +0000 (0:00:01.942) 0:02:07.032 ****** 2025-02-04 09:32:35.556714 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:32:35.556728 | orchestrator | 2025-02-04 09:32:35.556742 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-02-04 09:32:35.556755 | orchestrator | 2025-02-04 09:32:35.556769 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-02-04 09:32:35.556783 | orchestrator | 
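
The timestamps show the three "Restart rabbitmq services" plays running strictly one node after another (node-0 takes ~54s before node-1 even starts), i.e. a rolling restart. The classic Ansible shape for that, sketched with a plain docker command rather than the role's kolla_docker calls; using port 5672 (AMQP) as the readiness signal is an assumption:

- name: Restart rabbitmq services (rolling-restart sketch)
  hosts: rabbitmq
  serial: 1                      # one node at a time, as in the log
  tasks:
    - name: Restart rabbitmq container
      ansible.builtin.command: docker restart rabbitmq

    - name: Waiting for rabbitmq to start
      ansible.builtin.wait_for:
        host: "{{ ansible_host }}"
        port: 5672               # AMQP port; an assumption
        timeout: 300
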
Tuesday 04 February 2025 09:32:27 +0000 (0:00:16.740) 0:02:23.773 ****** 2025-02-04 09:32:35.556797 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:32:35.556811 | orchestrator | 2025-02-04 09:32:35.556826 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-02-04 09:32:35.556866 | orchestrator | Tuesday 04 February 2025 09:32:29 +0000 (0:00:01.676) 0:02:25.450 ****** 2025-02-04 09:32:35.556881 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-02-04 09:32:35.556906 | orchestrator | enable_outward_rabbitmq_True 2025-02-04 09:32:35.556921 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-02-04 09:32:35.556935 | orchestrator | outward_rabbitmq_restart 2025-02-04 09:32:35.556949 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:32:35.556963 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:32:35.556978 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:32:35.556992 | orchestrator | 2025-02-04 09:32:35.557006 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-02-04 09:32:35.557020 | orchestrator | skipping: no hosts matched 2025-02-04 09:32:35.557035 | orchestrator | 2025-02-04 09:32:35.557053 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-02-04 09:32:35.557068 | orchestrator | skipping: no hosts matched 2025-02-04 09:32:35.557082 | orchestrator | 2025-02-04 09:32:35.557096 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-02-04 09:32:35.557110 | orchestrator | skipping: no hosts matched 2025-02-04 09:32:35.557124 | orchestrator | 2025-02-04 09:32:35.557138 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:32:35.557152 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-02-04 09:32:35.557167 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-02-04 09:32:35.557181 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-04 09:32:35.557195 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-02-04 09:32:35.557209 | orchestrator | 2025-02-04 09:32:35.557223 | orchestrator | 2025-02-04 09:32:35.557238 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:32:35.557252 | orchestrator | Tuesday 04 February 2025 09:32:33 +0000 (0:00:04.176) 0:02:29.627 ****** 2025-02-04 09:32:35.557266 | orchestrator | =============================================================================== 2025-02-04 09:32:35.557280 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 84.36s 2025-02-04 09:32:35.557293 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.48s 2025-02-04 09:32:35.557308 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 5.10s 2025-02-04 09:32:35.557321 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 4.59s 2025-02-04 09:32:35.557336 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 4.51s 2025-02-04 09:32:35.557349 | orchestrator | 
rabbitmq : Enable all stable feature flags ------------------------------ 4.18s 2025-02-04 09:32:35.557363 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 3.96s 2025-02-04 09:32:35.557377 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 3.69s 2025-02-04 09:32:35.557391 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.09s 2025-02-04 09:32:35.557405 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.56s 2025-02-04 09:32:35.557419 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.37s 2025-02-04 09:32:35.557433 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.33s 2025-02-04 09:32:35.557447 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.25s 2025-02-04 09:32:35.557461 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.83s 2025-02-04 09:32:35.557475 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 1.68s 2025-02-04 09:32:35.557490 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.63s 2025-02-04 09:32:35.557532 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.60s 2025-02-04 09:32:38.607788 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.58s 2025-02-04 09:32:38.607950 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.57s 2025-02-04 09:32:38.607971 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 1.35s 2025-02-04 09:32:38.607988 | orchestrator | 2025-02-04 09:32:35 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:32:38.608021 | orchestrator | 2025-02-04 09:32:38 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:32:38.609815 | orchestrator | 2025-02-04 09:32:38 | INFO  | Task a8c189fe-3efa-4bf1-a2e1-63a40712887e is in state STARTED 2025-02-04 09:32:38.609899 | orchestrator | 2025-02-04 09:32:38 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED 2025-02-04 09:32:38.610587 | orchestrator | 2025-02-04 09:32:38 | INFO  | Task 4b1e67be-67e6-4e43-821a-22f805ba6da4 is in state STARTED 2025-02-04 09:32:41.654199 | orchestrator | 2025-02-04 09:32:38 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:32:41.654344 | orchestrator | 2025-02-04 09:32:41 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:32:41.654910 | orchestrator | 2025-02-04 09:32:41 | INFO  | Task a8c189fe-3efa-4bf1-a2e1-63a40712887e is in state STARTED 2025-02-04 09:32:41.655975 | orchestrator | 2025-02-04 09:32:41 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED 2025-02-04 09:32:41.656301 | orchestrator | 2025-02-04 09:32:41 | INFO  | Task 4b1e67be-67e6-4e43-821a-22f805ba6da4 is in state STARTED 2025-02-04 09:32:44.685829 | orchestrator | 2025-02-04 09:32:41 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:32:44.686086 | orchestrator | 2025-02-04 09:32:44 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:32:44.686717 | orchestrator | 2025-02-04 09:32:44 | INFO  | Task a8c189fe-3efa-4bf1-a2e1-63a40712887e is in state SUCCESS 2025-02-04 09:32:44.687093 | orchestrator 
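
"Enable all stable feature flags" corresponds to RabbitMQ's rabbitmqctl enable_feature_flag all, run after the whole cluster is back up so every node supports the flags. A sketch executed inside the kolla container; run_once is an assumption (the log shows the task reporting ok on all three nodes):

  - name: Enable all stable feature flags (sketch)
    ansible.builtin.command: docker exec rabbitmq rabbitmqctl enable_feature_flag all
    run_once: true    # any one cluster member can enable flags cluster-wide
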
| 2025-02-04 09:32:44 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED 2025-02-04 09:32:44.687591 | orchestrator | 2025-02-04 09:32:44 | INFO  | Task 4b1e67be-67e6-4e43-821a-22f805ba6da4 is in state STARTED 2025-02-04 09:32:44.689013 | orchestrator | 2025-02-04 09:32:44 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:32:47.725452 | orchestrator | 2025-02-04 09:32:47 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:32:50.774881 | orchestrator | 2025-02-04 09:32:47 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED 2025-02-04 09:32:50.774978 | orchestrator | 2025-02-04 09:32:47 | INFO  | Task 4b1e67be-67e6-4e43-821a-22f805ba6da4 is in state STARTED 2025-02-04 09:32:50.774990 | orchestrator | 2025-02-04 09:32:47 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:32:50.775014 | orchestrator | 2025-02-04 09:32:50 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:32:50.775976 | orchestrator | 2025-02-04 09:32:50 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED 2025-02-04 09:32:50.777156 | orchestrator | 2025-02-04 09:32:50 | INFO  | Task 4b1e67be-67e6-4e43-821a-22f805ba6da4 is in state STARTED 2025-02-04 09:32:53.821318 | orchestrator | 2025-02-04 09:32:50 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:32:53.821397 | orchestrator | 2025-02-04 09:32:53 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:32:53.823280 | orchestrator | 2025-02-04 09:32:53 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED 2025-02-04 09:32:53.824076 | orchestrator | 2025-02-04 09:32:53 | INFO  | Task 4b1e67be-67e6-4e43-821a-22f805ba6da4 is in state STARTED 2025-02-04 09:32:56.877796 | orchestrator | 2025-02-04 09:32:53 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:32:56.878007 | orchestrator | 2025-02-04 09:32:56 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:32:56.878792 | orchestrator | 2025-02-04 09:32:56 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED 2025-02-04 09:32:56.879353 | orchestrator | 2025-02-04 09:32:56 | INFO  | Task 4b1e67be-67e6-4e43-821a-22f805ba6da4 is in state STARTED 2025-02-04 09:32:59.925411 | orchestrator | 2025-02-04 09:32:56 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:32:59.925543 | orchestrator | 2025-02-04 09:32:59 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:32:59.931020 | orchestrator | 2025-02-04 09:32:59 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED 2025-02-04 09:32:59.932879 | orchestrator | 2025-02-04 09:32:59 | INFO  | Task 4b1e67be-67e6-4e43-821a-22f805ba6da4 is in state STARTED 2025-02-04 09:33:02.974511 | orchestrator | 2025-02-04 09:32:59 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:33:02.974644 | orchestrator | 2025-02-04 09:33:02 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:33:02.975401 | orchestrator | 2025-02-04 09:33:02 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED 2025-02-04 09:33:02.976552 | orchestrator | 2025-02-04 09:33:02 | INFO  | Task 4b1e67be-67e6-4e43-821a-22f805ba6da4 is in state STARTED 2025-02-04 09:33:02.976789 | orchestrator | 2025-02-04 09:33:02 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:33:06.029528 | orchestrator | 2025-02-04 09:33:06 | INFO  | Task 
cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:33:06.029776 | orchestrator | 2025-02-04 09:33:06 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED 2025-02-04 09:33:06.029810 | orchestrator | 2025-02-04 09:33:06 | INFO  | Task 4b1e67be-67e6-4e43-821a-22f805ba6da4 is in state STARTED 2025-02-04 09:33:09.058172 | orchestrator | 2025-02-04 09:33:06 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:33:09.058315 | orchestrator | 2025-02-04 09:33:09 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:33:09.058791 | orchestrator | 2025-02-04 09:33:09 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED 2025-02-04 09:33:09.059915 | orchestrator | 2025-02-04 09:33:09 | INFO  | Task 4b1e67be-67e6-4e43-821a-22f805ba6da4 is in state STARTED 2025-02-04 09:33:12.101720 | orchestrator | 2025-02-04 09:33:09 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:33:12.101902 | orchestrator | 2025-02-04 09:33:12 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:33:12.102129 | orchestrator | 2025-02-04 09:33:12 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED 2025-02-04 09:33:12.103322 | orchestrator | 2025-02-04 09:33:12 | INFO  | Task 4b1e67be-67e6-4e43-821a-22f805ba6da4 is in state STARTED 2025-02-04 09:33:12.103599 | orchestrator | 2025-02-04 09:33:12 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:33:15.152285 | orchestrator | 2025-02-04 09:33:15 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:33:15.152765 | orchestrator | 2025-02-04 09:33:15 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED 2025-02-04 09:33:15.152842 | orchestrator | 2025-02-04 09:33:15 | INFO  | Task 4b1e67be-67e6-4e43-821a-22f805ba6da4 is in state STARTED 2025-02-04 09:33:18.200941 | orchestrator | 2025-02-04 09:33:15 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:33:18.201112 | orchestrator | 2025-02-04 09:33:18 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:33:18.202676 | orchestrator | 2025-02-04 09:33:18 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED 2025-02-04 09:33:18.205196 | orchestrator | 2025-02-04 09:33:18 | INFO  | Task 4b1e67be-67e6-4e43-821a-22f805ba6da4 is in state STARTED 2025-02-04 09:33:18.205621 | orchestrator | 2025-02-04 09:33:18 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:33:21.258320 | orchestrator | 2025-02-04 09:33:21 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:33:21.258439 | orchestrator | 2025-02-04 09:33:21 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED 2025-02-04 09:33:21.260305 | orchestrator | 2025-02-04 09:33:21 | INFO  | Task 4b1e67be-67e6-4e43-821a-22f805ba6da4 is in state STARTED 2025-02-04 09:33:24.304118 | orchestrator | 2025-02-04 09:33:21 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:33:24.304256 | orchestrator | 2025-02-04 09:33:24 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:33:24.304899 | orchestrator | 2025-02-04 09:33:24 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED 2025-02-04 09:33:24.305737 | orchestrator | 2025-02-04 09:33:24 | INFO  | Task 4b1e67be-67e6-4e43-821a-22f805ba6da4 is in state STARTED 2025-02-04 09:33:27.364018 | orchestrator | 2025-02-04 09:33:24 | INFO  | Wait 1 second(s) until the next 
check 2025-02-04 09:33:27.364186 | orchestrator | 2025-02-04 09:33:27 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:33:27.372380 | orchestrator | 2025-02-04 09:33:27 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED 2025-02-04 09:33:27.372471 | orchestrator | 2025-02-04 09:33:27 | INFO  | Task 4b1e67be-67e6-4e43-821a-22f805ba6da4 is in state STARTED 2025-02-04 09:33:30.438298 | orchestrator | 2025-02-04 09:33:27 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:33:30.438451 | orchestrator | 2025-02-04 09:33:30 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:33:30.439816 | orchestrator | 2025-02-04 09:33:30 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED 2025-02-04 09:33:30.439854 | orchestrator | 2025-02-04 09:33:30 | INFO  | Task 4b1e67be-67e6-4e43-821a-22f805ba6da4 is in state STARTED 2025-02-04 09:33:33.477296 | orchestrator | 2025-02-04 09:33:30 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:33:33.477434 | orchestrator | 2025-02-04 09:33:33 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:33:33.479796 | orchestrator | 2025-02-04 09:33:33 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED 2025-02-04 09:33:33.484171 | orchestrator | 2025-02-04 09:33:33 | INFO  | Task 4b1e67be-67e6-4e43-821a-22f805ba6da4 is in state STARTED 2025-02-04 09:33:36.516447 | orchestrator | 2025-02-04 09:33:33 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:33:36.516582 | orchestrator | 2025-02-04 09:33:36 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:33:39.556653 | orchestrator | 2025-02-04 09:33:36 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED 2025-02-04 09:33:39.556805 | orchestrator | 2025-02-04 09:33:36 | INFO  | Task 4b1e67be-67e6-4e43-821a-22f805ba6da4 is in state STARTED 2025-02-04 09:33:39.556858 | orchestrator | 2025-02-04 09:33:36 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:33:39.556947 | orchestrator | 2025-02-04 09:33:39 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:33:39.559198 | orchestrator | 2025-02-04 09:33:39 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED 2025-02-04 09:33:39.559666 | orchestrator | 2025-02-04 09:33:39 | INFO  | Task 4b1e67be-67e6-4e43-821a-22f805ba6da4 is in state STARTED 2025-02-04 09:33:42.601637 | orchestrator | 2025-02-04 09:33:39 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:33:42.601777 | orchestrator | 2025-02-04 09:33:42 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:33:42.603129 | orchestrator | 2025-02-04 09:33:42 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED 2025-02-04 09:33:42.603177 | orchestrator | 2025-02-04 09:33:42 | INFO  | Task 4b1e67be-67e6-4e43-821a-22f805ba6da4 is in state STARTED 2025-02-04 09:33:45.651520 | orchestrator | 2025-02-04 09:33:42 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:33:45.651695 | orchestrator | 2025-02-04 09:33:45 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:33:48.689719 | orchestrator | 2025-02-04 09:33:45 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED 2025-02-04 09:33:48.689848 | orchestrator | 2025-02-04 09:33:45 | INFO  | Task 4b1e67be-67e6-4e43-821a-22f805ba6da4 is in state STARTED 2025-02-04 
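The wait loop above is the OSISM client polling asynchronous task IDs until they leave the STARTED state. A minimal sketch of the same poll-until-terminal-state pattern in Ansible, assuming a hypothetical HTTP status endpoint (the URL and JSON shape are illustrative, not OSISM's real API):

# Sketch only: poll a task-status endpoint until the task is done.
- name: Wait for the deployment task to reach a terminal state
  ansible.builtin.uri:
    url: "https://api.example.test/v1/tasks/{{ task_id }}"   # assumed endpoint
    return_content: true
  register: task_status
  # Keep re-checking while the task is still STARTED, as in the log above.
  until: task_status.json.state in ['SUCCESS', 'FAILURE']
  retries: 200   # upper bound on checks
  delay: 3       # the log shows a new check roughly every three seconds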
2025-02-04 09:33:51.756432 | orchestrator | None

PLAY [Group hosts based on configuration] **************************************

TASK [Group hosts based on Kolla action] ***************************************
Tuesday 04 February 2025 09:31:18 +0000 (0:00:00.411) 0:00:00.411 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [Group hosts based on enabled services] ***********************************
Tuesday 04 February 2025 09:31:19 +0000 (0:00:01.466) 0:00:01.878 ******
ok: [testbed-node-0] => (item=enable_ovn_True)
ok: [testbed-node-1] => (item=enable_ovn_True)
ok: [testbed-node-2] => (item=enable_ovn_True)
ok: [testbed-node-3] => (item=enable_ovn_True)
ok: [testbed-node-4] => (item=enable_ovn_True)
ok: [testbed-node-5] => (item=enable_ovn_True)
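The grouping task above is kolla-ansible's standard pattern for turning configuration flags into inventory groups at runtime. A minimal sketch of the mechanism, using the enable_ovn flag implied by the item enable_ovn_True (the exact task layout upstream may differ):

# Sketch: hosts with enable_ovn=true land in a group literally named
# "enable_ovn_True", which the following plays (ovn-controller, ovn-db) target.
- name: Group hosts based on enabled services
  ansible.builtin.group_by:
    key: "enable_ovn_{{ enable_ovn | bool }}"

# A later play can then be scoped to the generated group:
# - hosts: enable_ovn_True
#   roles: [ovn-controller]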
PLAY [Apply role ovn-controller] ***********************************************

TASK [ovn-controller : include_tasks] ******************************************
Tuesday 04 February 2025 09:31:21 +0000 (0:00:01.817) 0:00:03.695 ******
included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [ovn-controller : Ensuring config directories exist] **********************
Tuesday 04 February 2025 09:31:22 +0000 (0:00:01.219) 0:00:04.915 ******
changed: [testbed-node-0..5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
TASK [ovn-controller : Copying over config.json files for services] ************
Tuesday 04 February 2025 09:31:24 +0000 (0:00:01.271) 0:00:06.186 ******
changed: [testbed-node-0..5] => (item=ovn-controller, same service definition as above)

TASK [ovn-controller : Ensuring systemd override directory exists] *************
Tuesday 04 February 2025 09:31:26 +0000 (0:00:02.160) 0:00:08.347 ******
changed: [testbed-node-0..5] => (item=ovn-controller, same service definition as above)
TASK [ovn-controller : Copying over systemd override] **************************
Tuesday 04 February 2025 09:31:28 +0000 (0:00:02.004) 0:00:10.351 ******
changed: [testbed-node-0..5] => (item=ovn-controller, same service definition as above)
TASK [ovn-controller : Check ovn-controller containers] ************************
Tuesday 04 February 2025 09:31:30 +0000 (0:00:02.210) 0:00:12.562 ******
changed: [testbed-node-0..5] => (item=ovn-controller, same service definition as above)
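Each result above iterates the role's service map; the {'key': ..., 'value': ...} item shape indicates a dict-style loop over per-service definitions. A minimal sketch of that pattern, with the dict mirroring the item printed in the log (the variable name follows kolla-ansible's <role>_services convention; details may differ):

# Sketch of the service-map loop behind "Ensuring config directories exist".
- hosts: enable_ovn_True
  vars:
    ovn_controller_services:
      ovn-controller:
        container_name: ovn_controller
        group: ovn-controller
        enabled: true
        image: "nexus.testbed.osism.xyz:8193/kolla/ovn-controller:2024.1"
        volumes:
          - "/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro"
          - "/run/openvswitch:/run/openvswitch:shared"
          - "/etc/localtime:/etc/localtime:ro"
          - "kolla_logs:/var/log/kolla/"
        dimensions: {}
  tasks:
    # dict2items yields exactly the {'key': ..., 'value': ...} items the log prints.
    - name: Ensuring config directories exist
      ansible.builtin.file:
        path: "/etc/kolla/{{ item.key }}"
        state: directory
        mode: "0770"
      loop: "{{ ovn_controller_services | dict2items }}"
      when: item.value.enabled | bool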
TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
Tuesday 04 February 2025 09:31:32 +0000 (0:00:02.101) 0:00:14.663 ******
changed: [testbed-node-0..5]

TASK [ovn-controller : Configure OVN in OVSDB] *********************************
Tuesday 04 February 2025 09:31:36 +0000 (0:00:03.635) 0:00:18.299 ******
changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
changed: [testbed-node-0..5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
changed: [testbed-node-0..5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
changed: [testbed-node-0..5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
changed: [testbed-node-0..5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
changed: [testbed-node-0..5] => (item={'name': 'ovn-monitor-all', 'value': False})
changed: [testbed-node-0..2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
ok: [testbed-node-3..5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
changed: [testbed-node-0..2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
ok: [testbed-node-3..5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
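The external_ids written above (per-node tunnel IP, Geneve encapsulation, the southbound DB remotes, and gateway options on the three controllers) are what attach each chassis to the OVN control plane. One way such settings are applied, sketched with the openvswitch_db module and values copied from the log for testbed-node-0 (kolla-ansible's actual task may be structured differently):

# Sketch: write OVN chassis settings into the Open_vSwitch table's external_ids.
- name: Configure OVN in OVSDB
  openvswitch.openvswitch.openvswitch_db:
    table: Open_vSwitch
    record: .
    col: external_ids
    key: "{{ item.name }}"
    value: "{{ item.value }}"
  loop:
    - { name: ovn-encap-ip, value: "192.168.16.10" }   # this node's tunnel endpoint
    - { name: ovn-encap-type, value: "geneve" }
    - { name: ovn-remote, value: "tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642" }
    - { name: ovn-cms-options, value: "enable-chassis-as-gw,availability-zones=nova" }  # gateway chassis only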
TASK [ovn-controller : Flush handlers] *****************************************
Tuesday 04 February 2025 09:31:56 +0000 (0:00:19.998) 0:00:38.297 ******
[... five further empty "Flush handlers" task headers between 0:00:38.362 and 0:00:38.898 ...]

RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
Tuesday 04 February 2025 09:31:57 +0000 (0:00:00.218) 0:00:39.117 ******
ok: [testbed-node-0..5]

RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
Tuesday 04 February 2025 09:31:59 +0000 (0:00:02.071) 0:00:41.189 ******
changed: [testbed-node-0..5]
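The Flush handlers / RUNNING HANDLER sequence above is Ansible's notify mechanism: the earlier copy tasks notified the handlers, and a flush forces them to run before the play moves on. A minimal sketch of the pattern (the systemd unit name is an assumption based on kolla's kolla-<container>-container naming; the template source is illustrative):

- hosts: enable_ovn_True
  tasks:
    - name: Copying over systemd override
      ansible.builtin.template:
        src: systemd-override.conf.j2    # illustrative template name
        dest: /etc/systemd/system/kolla-ovn_controller-container.service.d/override.conf
      notify:
        - Reload systemd config
        - Restart ovn-controller container

    # Run any notified handlers now instead of at the end of the play.
    - name: Flush handlers
      ansible.builtin.meta: flush_handlers

  handlers:
    - name: Reload systemd config
      ansible.builtin.systemd:
        daemon_reload: true

    - name: Restart ovn-controller container
      ansible.builtin.systemd:
        name: kolla-ovn_controller-container.service   # assumed unit name
        state: restarted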
PLAY [Apply role ovn-db] *******************************************************

TASK [ovn-db : include_tasks] **************************************************
Tuesday 04 February 2025 09:32:14 +0000 (0:00:15.150) 0:00:56.340 ******
included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ovn-db : include_tasks] **************************************************
Tuesday 04 February 2025 09:32:15 +0000 (0:00:00.866) 0:00:57.206 ******
included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
Tuesday 04 February 2025 09:32:16 +0000 (0:00:01.483) 0:00:58.690 ******
ok: [testbed-node-0..2]

TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
Tuesday 04 February 2025 09:32:18 +0000 (0:00:01.335) 0:01:00.025 ******
ok: [testbed-node-0..2]

TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
Tuesday 04 February 2025 09:32:18 +0000 (0:00:00.635) 0:01:00.660 ******
ok: [testbed-node-0..2]
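The lookup_cluster.yml tasks above decide whether this is a fresh bootstrap by checking for pre-existing database volumes and grouping hosts on the result. A sketch of that check-and-group idea (community.docker.docker_volume_info is used here for illustration; kolla-ansible has its own container-facts plumbing):

# Sketch: detect an existing OVN NB volume and group hosts on the result.
- name: Checking for any existing OVN DB container volumes
  community.docker.docker_volume_info:
    name: ovn_nb_db
  register: nb_volume

- name: Divide hosts by their OVN NB volume availability
  ansible.builtin.group_by:
    key: "ovn_nb_had_volume_{{ nb_volume.exists }}"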
TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
Tuesday 04 February 2025 09:32:19 +0000 (0:00:01.038) 0:01:01.699 ******
ok: [testbed-node-0..2]

TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
Tuesday 04 February 2025 09:32:21 +0000 (0:00:01.777) 0:01:03.477 ******
ok: [testbed-node-0..2]

TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
Tuesday 04 February 2025 09:32:22 +0000 (0:00:01.183) 0:01:04.660 ******
skipping: [testbed-node-0..2]

TASK [ovn-db : Check OVN NB service port liveness] *****************************
Tuesday 04 February 2025 09:32:23 +0000 (0:00:00.419) 0:01:05.080 ******
skipping: [testbed-node-0..2]

TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
Tuesday 04 February 2025 09:32:23 +0000 (0:00:00.754) 0:01:05.834 ******
skipping: [testbed-node-0..2]

TASK [ovn-db : Get OVN NB database information] ********************************
Tuesday 04 February 2025 09:32:24 +0000 (0:00:00.953) 0:01:06.788 ******
skipping: [testbed-node-0..2]

TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
Tuesday 04 February 2025 09:32:25 +0000 (0:00:00.835) 0:01:07.623 ******
skipping: [testbed-node-0..2]

TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
Tuesday 04 February 2025 09:32:26 +0000 (0:00:00.406) 0:01:08.030 ******
skipping: [testbed-node-0..2]

TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
Tuesday 04 February 2025 09:32:26 +0000 (0:00:00.565) 0:01:08.595 ******
skipping: [testbed-node-0..2]

TASK [ovn-db : Check OVN SB service port liveness] *****************************
Tuesday 04 February 2025 09:32:27 +0000 (0:00:00.478) 0:01:09.074 ******
skipping: [testbed-node-0..2]

TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
Tuesday 04 February 2025 09:32:27 +0000 (0:00:00.300) 0:01:09.375 ******
skipping: [testbed-node-0..2]
TASK [ovn-db : Get OVN SB database information] ********************************
Tuesday 04 February 2025 09:32:27 +0000 (0:00:00.491) 0:01:09.867 ******
skipping: [testbed-node-0..2]

TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
Tuesday 04 February 2025 09:32:29 +0000 (0:00:01.414) 0:01:11.282 ******
skipping: [testbed-node-0..2]

TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
Tuesday 04 February 2025 09:32:30 +0000 (0:00:01.562) 0:01:12.844 ******
skipping: [testbed-node-0..2]

TASK [ovn-db : include_tasks] **************************************************
Tuesday 04 February 2025 09:32:31 +0000 (0:00:00.844) 0:01:13.689 ******
included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
Tuesday 04 February 2025 09:32:34 +0000 (0:00:02.581) 0:01:16.270 ******
ok: [testbed-node-0..2]

TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
Tuesday 04 February 2025 09:32:35 +0000 (0:00:01.422) 0:01:17.693 ******
ok: [testbed-node-0..2]

TASK [ovn-db : Check NB cluster status] ****************************************
Tuesday 04 February 2025 09:32:36 +0000 (0:00:00.907) 0:01:18.601 ******
skipping: [testbed-node-0..2]
TASK [ovn-db : Check SB cluster status] ****************************************
Tuesday 04 February 2025 09:32:37 +0000 (0:00:00.935) 0:01:19.537 ******
skipping: [testbed-node-0..2]

TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
Tuesday 04 February 2025 09:32:38 +0000 (0:00:00.935) 0:01:20.472 ******
skipping: [testbed-node-0..2]

TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
Tuesday 04 February 2025 09:32:39 +0000 (0:00:01.188) 0:01:21.660 ******
skipping: [testbed-node-0..2]

TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
Tuesday 04 February 2025 09:32:40 +0000 (0:00:00.928) 0:01:22.589 ******
skipping: [testbed-node-0..2]

TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
Tuesday 04 February 2025 09:32:41 +0000 (0:00:00.919) 0:01:23.509 ******
skipping: [testbed-node-0..2]
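Because no volumes or running clusters were found, only the "(new cluster)" bootstrap-args facts were set and every "(new member)" variant was skipped. A sketch of how such a fact can distinguish the first node from joiners (the --db-nb-cluster-remote-addr flag is a real ovn-ctl option, but how kolla-ansible composes it, and the Raft port 6643, are assumptions here):

# Sketch: the first ovn-nb-db host bootstraps the Raft cluster; the others
# join it by pointing --db-nb-cluster-remote-addr at that first host.
- name: Set bootstrap args fact for NB (new cluster)
  ansible.builtin.set_fact:
    ovn_nb_db_bootstrap_args: >-
      {{ '' if inventory_hostname == groups['ovn-nb-db'] | first
         else '--db-nb-cluster-remote-addr=tcp:' ~
              hostvars[groups['ovn-nb-db'] | first]['ansible_host'] ~ ':6643' }}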
TASK [ovn-db : Ensuring config directories exist] ******************************
Tuesday 04 February 2025 09:32:42 +0000 (0:00:00.561) 0:01:24.070 ******
changed: [testbed-node-0..2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-northd:2024.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-0..2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-nb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-0..2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ovn-sb-db-server:2024.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})

TASK [ovn-db : Copying over config.json files for services] ********************
Tuesday 04 February 2025 09:32:43 +0000 (0:00:01.554) 0:01:25.624 ******
changed: [testbed-node-0..2] => (item=ovn-northd, same service definition as above)
changed: [testbed-node-0..2] => (item=ovn-nb-db, same service definition as above)
changed: [testbed-node-0..2] => (item=ovn-sb-db, same service definition as above)
2025-02-04 09:33:51.761067 | orchestrator |
2025-02-04 09:33:51.761078 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-02-04 09:33:51.761090 | orchestrator | Tuesday 04 February 2025 09:32:47 +0000 (0:00:04.120) 0:01:29.745 ******
2025-02-04 09:33:51.761102 | orchestrator | changed: [testbed-node-0] => (item=ovn-northd)
2025-02-04 09:33:51.761118 | orchestrator | changed: [testbed-node-1] => (item=ovn-northd)
2025-02-04 09:33:51.761133 | orchestrator | changed: [testbed-node-2] => (item=ovn-northd)
2025-02-04 09:33:51.761150 | orchestrator | changed: [testbed-node-0] => (item=ovn-nb-db)
2025-02-04 09:33:51.761166 | orchestrator | changed: [testbed-node-1] => (item=ovn-nb-db)
2025-02-04 09:33:51.761178 | orchestrator | changed: [testbed-node-2] => (item=ovn-nb-db)
2025-02-04 09:33:51.761188 | orchestrator | changed: [testbed-node-0] => (item=ovn-sb-db)
2025-02-04 09:33:51.761199 | orchestrator | changed: [testbed-node-1] => (item=ovn-sb-db)
2025-02-04 09:33:51.761209 | orchestrator | changed: [testbed-node-2] => (item=ovn-sb-db)
2025-02-04 09:33:51.761220 | orchestrator |
2025-02-04 09:33:51.761231 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-02-04 09:33:51.761241 | orchestrator | Tuesday 04 February 2025 09:32:50 +0000 (0:00:03.205) 0:01:32.950 ******
2025-02-04 09:33:51.761251 | orchestrator |
2025-02-04 09:33:51.761262 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-02-04 09:33:51.761272 | orchestrator | Tuesday 04 February 2025 09:32:51 +0000 (0:00:00.083) 0:01:33.034 ******
2025-02-04 09:33:51.761283 | orchestrator |
2025-02-04 09:33:51.761293 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-02-04 09:33:51.761308 | orchestrator | Tuesday 04 February 2025 09:32:51 +0000 (0:00:00.107) 0:01:33.141 ******
2025-02-04 09:33:51.761319 | orchestrator |
2025-02-04 09:33:51.761329 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-02-04 09:33:51.761339 | orchestrator | Tuesday 04 February 2025 09:32:51 +0000 (0:00:00.280) 0:01:33.422 ******
2025-02-04 09:33:51.761350 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:33:51.761360 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:33:51.761370 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:33:51.761380 | orchestrator |
2025-02-04 09:33:51.761391 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-02-04 09:33:51.761401 | orchestrator | Tuesday 04 February 2025 09:32:59 +0000 (0:00:07.812) 0:01:41.234 ******
2025-02-04 09:33:51.761411 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:33:51.761422 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:33:51.761432 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:33:51.761442 | orchestrator |
2025-02-04 09:33:51.761452 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-02-04 09:33:51.761463 | orchestrator | Tuesday 04 February 2025 09:33:02 +0000 (0:00:03.090) 0:01:44.325 ******
2025-02-04 09:33:51.761473 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:33:51.761483 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:33:51.761494 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:33:51.761504 | orchestrator |
2025-02-04 09:33:51.761514 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-02-04 09:33:51.761524 | orchestrator | Tuesday 04 February 2025 09:33:05 +0000 (0:00:03.363) 0:01:47.688 ******
2025-02-04 09:33:51.761535 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:33:51.761545 | orchestrator |
2025-02-04 09:33:51.761555 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-02-04 09:33:51.761566 | orchestrator | Tuesday 04 February 2025 09:33:05 +0000 (0:00:00.113) 0:01:47.802 ******
2025-02-04 09:33:51.761576 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:33:51.761586 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:33:51.761597 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:33:51.761607 | orchestrator |
2025-02-04 09:33:51.761617 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-02-04 09:33:51.761628 | orchestrator | Tuesday 04 February 2025 09:33:06 +0000 (0:00:00.858) 0:01:48.660 ******
2025-02-04 09:33:51.761638 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:33:51.761648 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:33:51.761659 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:33:51.761669 | orchestrator |
2025-02-04 09:33:51.761679 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-02-04 09:33:51.761690 | orchestrator | Tuesday 04 February 2025 09:33:07 +0000 (0:00:00.665) 0:01:49.325 ******
2025-02-04 09:33:51.761700 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:33:51.761710 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:33:51.761729 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:33:51.761744 | orchestrator |
2025-02-04 09:33:51.761755 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-02-04 09:33:51.761765 | orchestrator | Tuesday 04 February 2025 09:33:08 +0000 (0:00:00.695) 0:01:50.021 ******
2025-02-04 09:33:51.761775 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:33:51.761786 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:33:51.761796 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:33:51.761806 | orchestrator |
2025-02-04 09:33:51.761816 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-02-04 09:33:51.761827 | orchestrator | Tuesday 04 February 2025 09:33:08 +0000 (0:00:00.532) 0:01:50.553 ******
2025-02-04 09:33:51.761837 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:33:51.761847 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:33:51.761858 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:33:51.761882 | orchestrator |
2025-02-04 09:33:51.761894 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-02-04 09:33:51.761909 | orchestrator | Tuesday 04 February 2025 09:33:09 +0000 (0:00:01.102) 0:01:51.655 ******
2025-02-04 09:33:51.761919 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:33:51.761930 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:33:51.761940 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:33:51.761950 | orchestrator |
2025-02-04 09:33:51.761961 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2025-02-04 09:33:51.761971 | orchestrator | Tuesday 04 February 2025 09:33:10 +0000 (0:00:00.808) 0:01:52.464 ******
2025-02-04 09:33:51.761981 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:33:51.761992 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:33:51.762002 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:33:51.762012 | orchestrator |
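The leader tasks above query the RAFT role of each ovsdb-server and only the current leader (testbed-node-0 here) applies the listener settings, which is why nodes 1 and 2 report skipping. A minimal sketch of equivalent tasks; the control-socket path and the NB port 6641 are the usual OVN defaults, assumed here rather than copied from the role:

---
# Sketch: query RAFT status, then open the NB listener on the leader only.
- name: Get OVN_Northbound cluster leader
  ansible.builtin.command: >
    docker exec ovn_nb_db
    ovn-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound
  register: nb_status
  changed_when: false

- name: Configure OVN NB connection settings
  ansible.builtin.command: >
    docker exec ovn_nb_db
    ovn-nbctl --inactivity-probe=60000 set-connection ptcp:6641:0.0.0.0
  when: "'Role: leader' in nb_status.stdout"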
2025-02-04 09:33:51.762078 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-02-04 09:33:51.762089 | orchestrator | Tuesday 04 February 2025 09:33:10 +0000 (0:00:00.388) 0:01:52.853 ******
2025-02-04 09:33:51.762100 | orchestrator | ok: [testbed-node-0] => (item=ovn-northd)
2025-02-04 09:33:51.762111 | orchestrator | ok: [testbed-node-1] => (item=ovn-northd)
2025-02-04 09:33:51.762122 | orchestrator | ok: [testbed-node-0] => (item=ovn-nb-db)
2025-02-04 09:33:51.762133 | orchestrator | ok: [testbed-node-2] => (item=ovn-northd)
2025-02-04 09:33:51.762144 | orchestrator | ok: [testbed-node-1] => (item=ovn-nb-db)
2025-02-04 09:33:51.762154 | orchestrator | ok: [testbed-node-0] => (item=ovn-sb-db)
2025-02-04 09:33:51.762165 | orchestrator | ok: [testbed-node-2] => (item=ovn-nb-db)
2025-02-04 09:33:51.762181 | orchestrator | ok: [testbed-node-1] => (item=ovn-sb-db)
2025-02-04 09:33:51.762198 | orchestrator | ok: [testbed-node-2] => (item=ovn-sb-db)
2025-02-04 09:33:51.762209 | orchestrator |
2025-02-04 09:33:51.762219 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-02-04 09:33:51.762230 | orchestrator | Tuesday 04 February 2025 09:33:12 +0000 (0:00:01.849) 0:01:54.702 ******
2025-02-04 09:33:51.762240 | orchestrator | ok: [testbed-node-2] => (item=ovn-northd)
2025-02-04 09:33:51.762251 | orchestrator | ok: [testbed-node-0] => (item=ovn-northd)
2025-02-04 09:33:51.762261 | orchestrator | ok: [testbed-node-1] => (item=ovn-northd)
2025-02-04 09:33:51.762271 | orchestrator | ok: [testbed-node-0] => (item=ovn-nb-db)
2025-02-04 09:33:51.762282 | orchestrator | changed: [testbed-node-2] => (item=ovn-nb-db)
2025-02-04 09:33:51.762292 | orchestrator | changed: [testbed-node-1] => (item=ovn-nb-db)
2025-02-04 09:33:51.762302 | orchestrator | ok: [testbed-node-0] => (item=ovn-sb-db)
2025-02-04 09:33:51.762313 | orchestrator | changed: [testbed-node-1] => (item=ovn-sb-db)
2025-02-04 09:33:51.762347 | orchestrator | changed: [testbed-node-2] => (item=ovn-sb-db)
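On this second pass the config.json files come back changed only for the database containers on testbed-node-1 and testbed-node-2, most likely because they are re-rendered without the one-off cluster-bootstrap arguments after the "Unset bootstrap args fact" task above, while testbed-node-0 stays ok. A minimal sketch of the copy task itself (the task name matches the log; template file names are assumptions):

---
# Sketch: render one config.json per enabled service. Changed results feed
# the per-container restart handlers seen further down in the log.
- name: Copying over config.json files for services
  ansible.builtin.template:
    src: "{{ item.key }}.json.j2"
    dest: "/etc/kolla/{{ item.key }}/config.json"
    mode: "0660"
  loop: "{{ ovn_db_services | dict2items }}"
  when: item.value.enabled | bool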
2025-02-04 09:33:51.762359 | orchestrator |
2025-02-04 09:33:51.762369 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-02-04 09:33:51.762380 | orchestrator | Tuesday 04 February 2025 09:33:18 +0000 (0:00:05.483) 0:02:00.186 ******
2025-02-04 09:33:51.762390 | orchestrator | ok: [testbed-node-1] => (item=ovn-northd)
2025-02-04 09:33:51.762401 | orchestrator | ok: [testbed-node-0] => (item=ovn-northd)
2025-02-04 09:33:51.762411 | orchestrator | ok: [testbed-node-2] => (item=ovn-northd)
2025-02-04 09:33:51.762422 | orchestrator | ok: [testbed-node-1] => (item=ovn-nb-db)
2025-02-04 09:33:51.762436 | orchestrator | ok: [testbed-node-0] => (item=ovn-nb-db)
2025-02-04 09:33:51.762446 | orchestrator | ok: [testbed-node-2] => (item=ovn-nb-db)
2025-02-04 09:33:51.762457 | orchestrator | ok: [testbed-node-1] => (item=ovn-sb-db)
2025-02-04 09:33:51.762467 | orchestrator | ok: [testbed-node-0] => (item=ovn-sb-db)
2025-02-04 09:33:51.762482 | orchestrator | ok: [testbed-node-2] => (item=ovn-sb-db)
2025-02-04 09:33:51.762493 | orchestrator |
2025-02-04 09:33:51.762507 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-02-04 09:33:51.762518 | orchestrator | Tuesday 04 February 2025 09:33:21 +0000 (0:00:03.605) 0:02:03.791 ******
2025-02-04 09:33:51.762528 | orchestrator |
2025-02-04 09:33:51.762539 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-02-04 09:33:51.762549 | orchestrator | Tuesday 04 February 2025 09:33:22 +0000 (0:00:00.273) 0:02:04.065 ******
2025-02-04 09:33:51.762559 | orchestrator |
2025-02-04 09:33:51.762569 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-02-04 09:33:51.762580 | orchestrator | Tuesday 04 February 2025 09:33:22 +0000 (0:00:00.067) 0:02:04.132 ******
2025-02-04 09:33:51.762590 | orchestrator |
2025-02-04 09:33:51.762600 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-02-04 09:33:51.762610 | orchestrator | Tuesday 04 February 2025 09:33:22 +0000 (0:00:00.061) 0:02:04.195 ******
2025-02-04 09:33:51.762620 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:33:51.762631 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:33:51.762641 | orchestrator |
2025-02-04 09:33:51.762651 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-02-04 09:33:51.762661 | orchestrator | Tuesday 04 February 2025 09:33:29 +0000 (0:00:06.999) 0:02:11.194 ******
2025-02-04 09:33:51.762672 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:33:51.762682 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:33:51.762692 | orchestrator |
2025-02-04 09:33:51.762702 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-02-04 09:33:51.762713 | orchestrator | Tuesday 04 February 2025 09:33:36 +0000 (0:00:06.917) 0:02:18.111 ******
2025-02-04 09:33:51.762723 | orchestrator | changed: [testbed-node-1]
2025-02-04 09:33:51.762733 | orchestrator | changed: [testbed-node-2]
2025-02-04 09:33:51.762743 | orchestrator |
2025-02-04 09:33:51.762753 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-02-04 09:33:51.762763 | orchestrator | Tuesday 04 February 2025 09:33:42 +0000 (0:00:06.715) 0:02:24.826 ******
2025-02-04 09:33:51.762773 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:33:51.762784 | orchestrator |
2025-02-04 09:33:51.762794 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-02-04 09:33:51.762804 | orchestrator | Tuesday 04 February 2025 09:33:43 +0000 (0:00:00.263) 0:02:25.089 ******
2025-02-04 09:33:51.762814 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:33:51.762824 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:33:51.762835 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:33:51.762845 | orchestrator |
2025-02-04 09:33:51.762855 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-02-04 09:33:51.762865 | orchestrator | Tuesday 04 February 2025 09:33:44 +0000 (0:00:01.015) 0:02:26.104 ******
2025-02-04 09:33:51.762889 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:33:51.762899 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:33:51.762909 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:33:51.762919 | orchestrator |
2025-02-04 09:33:51.762934 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-02-04 09:33:51.762944 | orchestrator | Tuesday 04 February 2025 09:33:44 +0000 (0:00:00.691) 0:02:26.796 ******
2025-02-04 09:33:51.762954 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:33:51.762970 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:33:51.762980 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:33:51.762990 | orchestrator |
2025-02-04 09:33:51.763000 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-02-04 09:33:51.763011 | orchestrator | Tuesday 04 February 2025 09:33:45 +0000 (0:00:01.199) 0:02:27.996 ******
2025-02-04 09:33:51.763021 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:33:51.763031 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:33:51.763041 | orchestrator | changed: [testbed-node-0]
2025-02-04 09:33:51.763051 | orchestrator |
2025-02-04 09:33:51.763062 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-02-04 09:33:51.763072 | orchestrator | Tuesday 04 February 2025 09:33:46 +0000 (0:00:00.680) 0:02:28.676 ******
2025-02-04 09:33:51.763082 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:33:51.763093 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:33:51.763103 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:33:51.763113 | orchestrator |
2025-02-04 09:33:51.763123 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-02-04 09:33:51.763133 | orchestrator | Tuesday 04 February 2025 09:33:47 +0000 (0:00:00.864) 0:02:29.540 ******
2025-02-04 09:33:51.763144 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:33:51.763154 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:33:51.763164 | orchestrator | ok: [testbed-node-2]
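The role runs this restart-and-verify cycle twice: the first pass (further above) includes the bootstrap leader testbed-node-0, the second pass restarts only the joining followers, which matches the split suggested by the "Divide hosts by their OVN SB leader/follower role" task in the recap below. A minimal sketch of the flush-and-wait pattern; ports 6641/6642 are the standard OVN NB/SB listener ports, and api_interface_address is an assumed variable name:

---
# Sketch: force pending restart handlers to run now, then block until the
# database listeners answer before touching cluster settings.
- name: Flush handlers
  ansible.builtin.meta: flush_handlers

- name: Wait for ovn-nb-db
  ansible.builtin.wait_for:
    host: "{{ api_interface_address }}"
    port: 6641
    timeout: 60

- name: Wait for ovn-sb-db
  ansible.builtin.wait_for:
    host: "{{ api_interface_address }}"
    port: 6642
    timeout: 60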
2025-02-04 09:33:51.763174 | orchestrator |
2025-02-04 09:33:51.763184 | orchestrator | PLAY RECAP *********************************************************************
2025-02-04 09:33:51.763195 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0  failed=0  skipped=20  rescued=0  ignored=0
2025-02-04 09:33:51.763205 | orchestrator | testbed-node-1 : ok=43  changed=18  unreachable=0  failed=0  skipped=22  rescued=0  ignored=0
2025-02-04 09:33:51.763216 | orchestrator | testbed-node-2 : ok=43  changed=18  unreachable=0  failed=0  skipped=22  rescued=0  ignored=0
2025-02-04 09:33:51.763226 | orchestrator | testbed-node-3 : ok=12  changed=8   unreachable=0  failed=0  skipped=0   rescued=0  ignored=0
2025-02-04 09:33:51.763236 | orchestrator | testbed-node-4 : ok=12  changed=8   unreachable=0  failed=0  skipped=0   rescued=0  ignored=0
2025-02-04 09:33:51.763247 | orchestrator | testbed-node-5 : ok=12  changed=8   unreachable=0  failed=0  skipped=0   rescued=0  ignored=0
2025-02-04 09:33:51.763257 | orchestrator |
2025-02-04 09:33:51.763267 | orchestrator |
2025-02-04 09:33:51.763277 | orchestrator | TASKS RECAP ********************************************************************
2025-02-04 09:33:51.763292 | orchestrator | Tuesday 04 February 2025 09:33:48 +0000 (0:00:01.428) 0:02:30.969 ******
2025-02-04 09:33:54.820622 | orchestrator | ===============================================================================
2025-02-04 09:33:54.820715 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.00s
2025-02-04 09:33:54.820726 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 15.15s
2025-02-04 09:33:54.820735 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.81s
2025-02-04 09:33:54.820744 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 10.08s
2025-02-04 09:33:54.820753 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 10.01s
2025-02-04 09:33:54.820761 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.48s
2025-02-04 09:33:54.820769 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.12s
2025-02-04 09:33:54.820778 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.64s
2025-02-04 09:33:54.820786 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.61s
2025-02-04 09:33:54.820794 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.21s
2025-02-04 09:33:54.820825 | orchestrator | ovn-db : include_tasks -------------------------------------------------- 2.58s
2025-02-04 09:33:54.820834 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.21s
2025-02-04 09:33:54.820843 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.16s
2025-02-04 09:33:54.820851 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 2.10s
2025-02-04 09:33:54.820859 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.07s
2025-02-04 09:33:54.820921 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 2.00s
2025-02-04 09:33:54.820931 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.85s
2025-02-04 09:33:54.820939 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.82s
2025-02-04 09:33:54.820948 | orchestrator | ovn-db : Establish whether the OVN NB cluster has already existed ------- 1.78s
2025-02-04 09:33:54.820956 | orchestrator | ovn-db : Divide hosts by their OVN SB leader/follower role -------------- 1.56s
2025-02-04 09:33:54.820993 | orchestrator | 2025-02-04 09:33:54 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED
2025-02-04 09:33:54.822376 | orchestrator | 2025-02-04 09:33:54 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state STARTED
2025-02-04 09:33:57.870843 | orchestrator | 2025-02-04 09:33:54 | INFO  | Wait 1 second(s) until the next check
(the same pair of poll records, separated by "Wait 1 second(s) until the next check", repeated every ~3 seconds from 09:33:57 through 09:36:18; both tasks remained in state STARTED throughout)
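These poll records come from the OSISM watcher that blocks until the manager's background deployment tasks finish. A hypothetical equivalent of that loop as an Ansible retry, shown only to illustrate the until/retries/delay pattern; osism_task_state is an invented helper command, not a documented CLI:

---
# Hypothetical sketch: poll a task id until it leaves the STARTED state.
- name: Wait for deployment task to finish
  ansible.builtin.command: "osism_task_state {{ task_id }}"  # invented helper
  register: result
  until: "'STARTED' not in result.stdout"
  retries: 600
  delay: 1
  changed_when: false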
2025-02-04 09:36:21.093387 | orchestrator | 2025-02-04 09:36:21 | INFO  | Task d6a43cdf-3552-40d2-be53-ab531cfa2f71 is in state STARTED
2025-02-04 09:36:21.093853 | orchestrator | 2025-02-04 09:36:21 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED
2025-02-04 09:36:21.093888 | orchestrator | 2025-02-04 09:36:21 | INFO  | Task a2947d26-032a-46b2-b0b0-2244a30dbc60 is in state STARTED
2025-02-04 09:36:21.100445 | orchestrator |
2025-02-04 09:36:21.100534 | orchestrator |
2025-02-04 09:36:21.100554 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-02-04 09:36:21.102241 | orchestrator |
2025-02-04 09:36:21.102317 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-02-04 09:36:21.102337 | orchestrator | Tuesday 04 February 2025 09:29:35 +0000 (0:00:00.275) 0:00:00.275 ******
2025-02-04 09:36:21.102351 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:36:21.102365 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:36:21.102379 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:36:21.102392 | orchestrator |
2025-02-04 09:36:21.102405 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-02-04 09:36:21.102419 | orchestrator | Tuesday 04 February 2025 09:29:37 +0000 (0:00:01.141) 0:00:01.416 ******
2025-02-04 09:36:21.102433 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-02-04 09:36:21.102446 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-02-04 09:36:21.102459 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-02-04 09:36:21.102472 | orchestrator |
2025-02-04 09:36:21.102485 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-02-04 09:36:21.102497 | orchestrator |
2025-02-04 09:36:21.102510 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-02-04 09:36:21.102523 | orchestrator | Tuesday 04 February 2025 09:29:37 +0000 (0:00:00.708) 0:00:02.125 ******
2025-02-04 09:36:21.102536 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
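The "Group hosts based on enabled services" task creates dynamic inventory groups from feature flags, and each item label (enable_loadbalancer_True) becomes a group name. A minimal sketch in the kolla-ansible style; enable_loadbalancer is taken from the item label in the log:

---
# Sketch: build a dynamic group per feature flag so later plays can target
# hosts with a service enabled.
- name: Group hosts based on enabled services
  ansible.builtin.group_by:
    key: "{{ item }}"
  loop:
    - "enable_loadbalancer_{{ enable_loadbalancer | bool }}"

The following "Apply role loadbalancer" play can then select its hosts with hosts: enable_loadbalancer_True.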
2025-02-04 09:36:21.102549 | orchestrator |
2025-02-04 09:36:21.102562 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-02-04 09:36:21.102575 | orchestrator | Tuesday 04 February 2025 09:29:38 +0000 (0:00:00.797) 0:00:02.923 ******
2025-02-04 09:36:21.102588 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:36:21.102601 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:36:21.102614 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:36:21.102627 | orchestrator |
2025-02-04 09:36:21.102640 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-02-04 09:36:21.102653 | orchestrator | Tuesday 04 February 2025 09:29:40 +0000 (0:00:02.212) 0:00:05.135 ******
2025-02-04 09:36:21.102666 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-02-04 09:36:21.102691 | orchestrator |
2025-02-04 09:36:21.102705 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-02-04 09:36:21.102717 | orchestrator | Tuesday 04 February 2025 09:29:42 +0000 (0:00:01.610) 0:00:06.746 ******
2025-02-04 09:36:21.102762 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:36:21.102798 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:36:21.102812 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:36:21.102825 | orchestrator |
2025-02-04 09:36:21.102837 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-02-04 09:36:21.102850 | orchestrator | Tuesday 04 February 2025 09:29:43 +0000 (0:00:01.217) 0:00:07.964 ******
2025-02-04 09:36:21.102863 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-02-04 09:36:21.102876 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-02-04 09:36:21.102889 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-02-04 09:36:21.102902 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-02-04 09:36:21.102917 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-02-04 09:36:21.102930 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-02-04 09:36:21.102943 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-02-04 09:36:21.102956 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-02-04 09:36:21.102968 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-02-04 09:36:21.102981 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-02-04 09:36:21.102994 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-02-04 09:36:21.103006 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-02-04 09:36:21.103019 | orchestrator |
2025-02-04 09:36:21.103031 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-02-04 09:36:21.103044 | orchestrator | Tuesday 04 February 2025 09:29:49 +0000 (0:00:06.263) 0:00:14.228 ******
2025-02-04 09:36:21.103057 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-02-04 09:36:21.103069 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-02-04 09:36:21.103082 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-02-04 09:36:21.103095 | orchestrator |
2025-02-04 09:36:21.103107 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-02-04 09:36:21.103120 | orchestrator | Tuesday 04 February 2025 09:29:51 +0000 (0:00:02.141) 0:00:16.369 ******
2025-02-04 09:36:21.103132 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-02-04 09:36:21.103145 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-02-04 09:36:21.103158 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-02-04 09:36:21.103171 | orchestrator |
2025-02-04 09:36:21.103183 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-02-04 09:36:21.103196 | orchestrator | Tuesday 04 February 2025 09:29:55 +0000 (0:00:03.653) 0:00:20.023 ******
2025-02-04 09:36:21.103209 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2025-02-04 09:36:21.103222 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:36:21.103249 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2025-02-04 09:36:21.103263 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:36:21.103275 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2025-02-04 09:36:21.103288 | orchestrator | skipping: [testbed-node-2]
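The kernel tuning here is what HA load balancing needs: the ip_nonlocal_bind knobs let haproxy bind to the keepalived VIP even on nodes that do not currently own it, and ip_vs provides the IPVS machinery keepalived uses. A minimal sketch of these tasks; the module names (ansible.posix.sysctl, community.general.modprobe) are the real collection modules, while the KOLLA_UNSET handling is simplified away:

---
# Sketch: apply the sysctl values from the log, load ip_vs now, and persist
# it across reboots via modules-load.d.
- name: Setting sysctl values
  ansible.posix.sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
    sysctl_set: true
  loop:
    - { name: net.ipv6.ip_nonlocal_bind, value: 1 }
    - { name: net.ipv4.ip_nonlocal_bind, value: 1 }
    - { name: net.unix.max_dgram_qlen, value: 128 }

- name: Load modules
  community.general.modprobe:
    name: ip_vs
    state: present

- name: Persist modules via modules-load.d
  ansible.builtin.copy:
    content: "ip_vs\n"
    dest: /etc/modules-load.d/ip_vs.conf
    mode: "0644"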
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-02-04 09:36:21.103398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-04 09:36:21.103412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-02-04 09:36:21.103446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-04 09:36:21.103462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-04 09:36:21.103483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-04 09:36:21.103497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e4f7be7f8a3810e8af710aa11d61e188423ebee4', '__omit_place_holder__e4f7be7f8a3810e8af710aa11d61e188423ebee4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-04 09:36:21.103512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-04 09:36:21.103525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e4f7be7f8a3810e8af710aa11d61e188423ebee4', '__omit_place_holder__e4f7be7f8a3810e8af710aa11d61e188423ebee4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-04 09:36:21.103538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e4f7be7f8a3810e8af710aa11d61e188423ebee4', '__omit_place_holder__e4f7be7f8a3810e8af710aa11d61e188423ebee4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-04 09:36:21.103551 | orchestrator | 2025-02-04 09:36:21.103564 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-02-04 09:36:21.103577 | orchestrator | Tuesday 04 February 2025 09:30:00 +0000 (0:00:03.207) 0:00:24.957 ****** 2025-02-04 09:36:21.103590 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:36:21.103603 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:36:21.103629 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:36:21.103642 | orchestrator | 2025-02-04 09:36:21.103655 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-02-04 09:36:21.103674 | orchestrator | Tuesday 04 February 2025 09:30:03 +0000 (0:00:02.440) 0:00:27.397 ****** 2025-02-04 09:36:21.103692 | orchestrator | skipping: [testbed-node-0] => (item=users)  2025-02-04 09:36:21.103705 | orchestrator | skipping: [testbed-node-1] => (item=users)  2025-02-04 09:36:21.103722 | orchestrator | skipping: [testbed-node-2] => (item=users)  2025-02-04 09:36:21.103742 | orchestrator | skipping: [testbed-node-0] => (item=rules)  2025-02-04 09:36:21.103763 | orchestrator | skipping: [testbed-node-0] 
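The skips above are driven by each service's 'enabled' flag: proxysql is disabled in this deployment, so its config items are skipped on every node, while the haproxy and keepalived items run. A minimal sketch of the dict-driven loop behind this output, assuming a `loadbalancer_services` variable shaped like the items printed in the log (illustrative only, not the literal kolla-ansible task):

    - name: Ensuring config directories exist
      ansible.builtin.file:
        path: "/etc/kolla/{{ item.key }}"
        state: directory
        mode: "0770"
      # Disabled entries (here: proxysql, haproxy-ssh) show up as "skipping"
      # because this conditional evaluates to false for them.
      when: item.value.enabled | bool
      with_dict: "{{ loadbalancer_services }}"

The same pattern repeats for every task in this role, which is why each proxysql- or haproxy-ssh-related task below reports three skips, one per node.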
2025-02-04 09:36:21.103809 | orchestrator | skipping: [testbed-node-1] => (item=rules)  2025-02-04 09:36:21.103823 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.103836 | orchestrator | skipping: [testbed-node-2] => (item=rules)  2025-02-04 09:36:21.103848 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.103860 | orchestrator | 2025-02-04 09:36:21.103873 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-02-04 09:36:21.103885 | orchestrator | Tuesday 04 February 2025 09:30:06 +0000 (0:00:03.386) 0:00:30.784 ****** 2025-02-04 09:36:21.103898 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:36:21.103925 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:36:21.103938 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:36:21.103959 | orchestrator | 2025-02-04 09:36:21.103972 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-02-04 09:36:21.103990 | orchestrator | Tuesday 04 February 2025 09:30:08 +0000 (0:00:01.604) 0:00:32.388 ****** 2025-02-04 09:36:21.104002 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.104015 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.104028 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.104040 | orchestrator | 2025-02-04 09:36:21.104053 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-02-04 09:36:21.104065 | orchestrator | Tuesday 04 February 2025 09:30:09 +0000 (0:00:01.772) 0:00:34.161 ****** 2025-02-04 09:36:21.104079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-02-04 09:36:21.104093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-02-04 09:36:21.104107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-02-04 09:36:21.104120 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-02-04 09:36:21.104568 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-02-04 09:36:21.104603 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-02-04 09:36:21.104618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-04 09:36:21.104632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-04 09:36:21.104645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e4f7be7f8a3810e8af710aa11d61e188423ebee4', '__omit_place_holder__e4f7be7f8a3810e8af710aa11d61e188423ebee4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-04 09:36:21.104658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-04 09:36:21.104689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e4f7be7f8a3810e8af710aa11d61e188423ebee4', '__omit_place_holder__e4f7be7f8a3810e8af710aa11d61e188423ebee4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-04 09:36:21.106145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e4f7be7f8a3810e8af710aa11d61e188423ebee4', '__omit_place_holder__e4f7be7f8a3810e8af710aa11d61e188423ebee4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-04 09:36:21.106234 | orchestrator | 2025-02-04 09:36:21.106250 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-02-04 09:36:21.106261 | orchestrator | Tuesday 04 February 2025 09:30:14 +0000 (0:00:04.282) 0:00:38.444 ****** 2025-02-04 09:36:21.106273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-02-04 09:36:21.106314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-02-04 09:36:21.106326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-02-04 09:36:21.106353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-04 09:36:21.106396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-04 09:36:21.106419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-04 09:36:21.106441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-04 09:36:21.106453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-04 09:36:21.106465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e4f7be7f8a3810e8af710aa11d61e188423ebee4', '__omit_place_holder__e4f7be7f8a3810e8af710aa11d61e188423ebee4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-04 09:36:21.106482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-04 09:36:21.106494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e4f7be7f8a3810e8af710aa11d61e188423ebee4', '__omit_place_holder__e4f7be7f8a3810e8af710aa11d61e188423ebee4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-04 09:36:21.106548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e4f7be7f8a3810e8af710aa11d61e188423ebee4', '__omit_place_holder__e4f7be7f8a3810e8af710aa11d61e188423ebee4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-04 09:36:21.106635 | orchestrator | 2025-02-04 09:36:21.106647 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-02-04 09:36:21.106658 | orchestrator | Tuesday 04 
February 2025 09:30:18 +0000 (0:00:04.506) 0:00:42.951 ****** 2025-02-04 09:36:21.106677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-02-04 09:36:21.106694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-04 09:36:21.106705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-02-04 09:36:21.106716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-02-04 09:36:21.106743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-04 09:36:21.106754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-04 09:36:21.106790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-04 09:36:21 | INFO  | Task 884b52a5-fd18-45af-902d-ce9b589cbc22 is in state SUCCESS 2025-02-04 09:36:21.106814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e4f7be7f8a3810e8af710aa11d61e188423ebee4', '__omit_place_holder__e4f7be7f8a3810e8af710aa11d61e188423ebee4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-04 09:36:21.106825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-04 09:36:21.106837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e4f7be7f8a3810e8af710aa11d61e188423ebee4', '__omit_place_holder__e4f7be7f8a3810e8af710aa11d61e188423ebee4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-04 09:36:21.106854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-04 09:36:21.106865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e4f7be7f8a3810e8af710aa11d61e188423ebee4', '__omit_place_holder__e4f7be7f8a3810e8af710aa11d61e188423ebee4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-04 09:36:21.106876 | orchestrator | 2025-02-04 09:36:21.106886 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-02-04 09:36:21.106897 | orchestrator | Tuesday 04 February 2025 09:30:21 +0000 (0:00:02.879) 0:00:45.830 ****** 2025-02-04 09:36:21.106907 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-02-04 09:36:21.106918 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-02-04 09:36:21.106929 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-02-04 09:36:21.106939 | orchestrator | 2025-02-04 09:36:21.106949 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-02-04 09:36:21.106959 | orchestrator | Tuesday 04 February 2025 09:30:24 +0000 (0:00:02.914) 0:00:48.745 ****** 2025-02-04 09:36:21.106970 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)  2025-02-04 09:36:21.106980 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.106997 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)  2025-02-04 09:36:21.107008 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.107019 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)  2025-02-04 09:36:21.107029 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.107040 | orchestrator | 2025-02-04 09:36:21.107050 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-02-04 09:36:21.107061 | orchestrator | Tuesday 04 February 2025 09:30:27 +0000 (0:00:03.283) 0:00:52.030 ****** 2025-02-04 09:36:21.107071 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.107094 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.107105 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.107115 | orchestrator | 2025-02-04 09:36:21.107125 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-02-04 09:36:21.107135 | orchestrator | Tuesday 04 February 2025 09:30:31 +0000 (0:00:03.872) 0:00:55.902 ****** 2025-02-04 09:36:21.107172 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-02-04 09:36:21.107184 | orchestrator 
| changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-02-04 09:36:21.107194 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-02-04 09:36:21.107228 | orchestrator | 2025-02-04 09:36:21.107239 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-02-04 09:36:21.107255 | orchestrator | Tuesday 04 February 2025 09:30:38 +0000 (0:00:07.170) 0:01:03.073 ****** 2025-02-04 09:36:21.107265 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-02-04 09:36:21.107275 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-02-04 09:36:21.107286 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-02-04 09:36:21.107296 | orchestrator | 2025-02-04 09:36:21.107306 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-02-04 09:36:21.107316 | orchestrator | Tuesday 04 February 2025 09:30:42 +0000 (0:00:03.734) 0:01:06.807 ****** 2025-02-04 09:36:21.107327 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-02-04 09:36:21.107337 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-02-04 09:36:21.107368 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-02-04 09:36:21.107380 | orchestrator | 2025-02-04 09:36:21.107390 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-02-04 09:36:21.107400 | orchestrator | Tuesday 04 February 2025 09:30:47 +0000 (0:00:05.380) 0:01:12.188 ****** 2025-02-04 09:36:21.107411 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-02-04 09:36:21.107421 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-02-04 09:36:21.107431 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-02-04 09:36:21.107442 | orchestrator | 2025-02-04 09:36:21.107452 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-02-04 09:36:21.107463 | orchestrator | Tuesday 04 February 2025 09:30:50 +0000 (0:00:02.972) 0:01:15.160 ****** 2025-02-04 09:36:21.107473 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:36:21.107484 | orchestrator | 2025-02-04 09:36:21.107494 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-02-04 09:36:21.107504 | orchestrator | Tuesday 04 February 2025 09:30:51 +0000 (0:00:01.109) 0:01:16.269 ****** 2025-02-04 09:36:21.107519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 
'timeout': '30'}}}) 2025-02-04 09:36:21.107531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-02-04 09:36:21.107549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-02-04 09:36:21.107565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-04 09:36:21.107576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-04 09:36:21.107587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-04 09:36:21.107598 | orchestrator | 2025-02-04 09:36:21.107608 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-02-04 09:36:21.107619 | orchestrator | Tuesday 04 February 2025 09:30:54 +0000 (0:00:02.801) 0:01:19.070 ****** 2025-02-04 09:36:21.107629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-02-04 09:36:21.107640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-04 09:36:21.107651 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.107661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-02-04 09:36:21.107681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-04 09:36:21.107698 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.107708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-02-04 09:36:21.107719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-04 09:36:21.107730 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.107741 | orchestrator | 2025-02-04 09:36:21.107760 | orchestrator | TASK 
[service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-02-04 09:36:21.107831 | orchestrator | Tuesday 04 February 2025 09:30:55 +0000 (0:00:01.125) 0:01:20.196 ****** 2025-02-04 09:36:21.107856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-02-04 09:36:21.107874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-04 09:36:21.107890 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.107906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-02-04 09:36:21.107922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-04 09:36:21.107948 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.107972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-02-04 09:36:21.107988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}})  2025-02-04 09:36:21.107998 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.108007 | orchestrator | 2025-02-04 09:36:21.108016 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-02-04 09:36:21.108024 | orchestrator | Tuesday 04 February 2025 09:30:57 +0000 (0:00:01.339) 0:01:21.536 ****** 2025-02-04 09:36:21.108033 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-02-04 09:36:21.108042 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-02-04 09:36:21.108051 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-02-04 09:36:21.108060 | orchestrator | 2025-02-04 09:36:21.108073 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-02-04 09:36:21.108082 | orchestrator | Tuesday 04 February 2025 09:30:59 +0000 (0:00:02.431) 0:01:23.967 ****** 2025-02-04 09:36:21.108091 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)  2025-02-04 09:36:21.108099 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.108111 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)  2025-02-04 09:36:21.108120 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.108129 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)  2025-02-04 09:36:21.108137 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.108146 | orchestrator | 2025-02-04 09:36:21.108155 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-02-04 09:36:21.108163 | orchestrator | Tuesday 04 February 2025 09:31:01 +0000 (0:00:01.497) 0:01:25.465 ****** 2025-02-04 09:36:21.108172 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-02-04 09:36:21.108184 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-02-04 09:36:21.108193 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-02-04 09:36:21.108202 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-02-04 09:36:21.108211 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.108219 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-02-04 09:36:21.108233 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.108242 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-02-04 09:36:21.108250 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.108259 | orchestrator | 2025-02-04 09:36:21.108268 | orchestrator | TASK [loadbalancer : Check 
loadbalancer containers] **************************** 2025-02-04 09:36:21.108276 | orchestrator | Tuesday 04 February 2025 09:31:05 +0000 (0:00:04.823) 0:01:30.288 ****** 2025-02-04 09:36:21.108285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-02-04 09:36:21.108299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-04 09:36:21.108316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-02-04 09:36:21.108325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-04 09:36:21.108334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 
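Each enabled container above carries a healthcheck block that the role passes through to the container engine; 'healthcheck_curl' probes HAProxy's monitor endpoint on port 61313 of each node's internal API address (192.168.16.10-12). An equivalent Docker Compose rendering of the haproxy entry for testbed-node-0, assuming the bare numbers in the dict are seconds (an illustrative mapping of the logged values, not the literal kolla template):

    services:
      haproxy:
        image: nexus.testbed.osism.xyz:8193/kolla/haproxy:2024.1
        privileged: true
        healthcheck:
          # Same probe string the log shows for testbed-node-0
          test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:61313"]
          interval: 30s
          timeout: 30s
          retries: 3
          start_period: 5s

With retries set to 3, the engine marks the container unhealthy only after three consecutive probe failures.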
2025-02-04 09:36:21.108344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/proxysql:2024.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-02-04 09:36:21.108357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-04 09:36:21.108371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e4f7be7f8a3810e8af710aa11d61e188423ebee4', '__omit_place_holder__e4f7be7f8a3810e8af710aa11d61e188423ebee4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-04 09:36:21.108381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-04 09:36:21.108390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e4f7be7f8a3810e8af710aa11d61e188423ebee4', '__omit_place_holder__e4f7be7f8a3810e8af710aa11d61e188423ebee4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-04 09:36:21.108399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keepalived:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', ''], 'dimensions': {}}}) 2025-02-04 09:36:21.108412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/haproxy-ssh:2024.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e4f7be7f8a3810e8af710aa11d61e188423ebee4', '__omit_place_holder__e4f7be7f8a3810e8af710aa11d61e188423ebee4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-02-04 09:36:21.108426 | orchestrator | 2025-02-04 09:36:21.108435 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-02-04 09:36:21.108443 | orchestrator | Tuesday 04 February 2025 09:31:08 +0000 (0:00:03.023) 0:01:33.312 ****** 2025-02-04 09:36:21.108452 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:36:21.108461 | orchestrator | 2025-02-04 09:36:21.108470 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-02-04 09:36:21.108478 | orchestrator | Tuesday 04 February 2025 09:31:09 +0000 (0:00:00.940) 0:01:34.253 ****** 2025-02-04 09:36:21.108488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-02-04 09:36:21.108503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-02-04 09:36:21.108514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.108523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.108532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-02-04 09:36:21.108545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-02-04 09:36:21.108558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-02-04 09:36:21.108581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.108590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-02-04 09:36:21.108599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.108608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.108621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.108630 | orchestrator | 2025-02-04 09:36:21.108639 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-02-04 09:36:21.108648 | orchestrator | Tuesday 04 February 2025 09:31:17 +0000 (0:00:07.263) 0:01:41.517 ****** 2025-02-04 09:36:21.108660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 
'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-02-04 09:36:21.108673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-02-04 09:36:21.108682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.108691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.108700 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.108709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-02-04 09:36:21.108723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-02-04 09:36:21.108732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.108741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.108750 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.108764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-api:2024.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-02-04 09:36:21.108796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-evaluator:2024.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-02-04 09:36:21.108806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-listener:2024.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 
'timeout': '30'}}})  2025-02-04 09:36:21.108820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/aodh-notifier:2024.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.108829 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.108838 | orchestrator | 2025-02-04 09:36:21.108847 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-02-04 09:36:21.108855 | orchestrator | Tuesday 04 February 2025 09:31:19 +0000 (0:00:02.444) 0:01:43.961 ****** 2025-02-04 09:36:21.108864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-02-04 09:36:21.108874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-02-04 09:36:21.108884 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.108893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-02-04 09:36:21.108902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-02-04 09:36:21.108911 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.108920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-02-04 09:36:21.108928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-02-04 09:36:21.108937 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.108946 | orchestrator | 2025-02-04 09:36:21.108955 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-02-04 09:36:21.108963 | orchestrator | Tuesday 04 February 2025 09:31:21 +0000 (0:00:02.097) 0:01:46.059 ****** 2025-02-04 09:36:21.108972 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.108984 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.108993 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.109002 | orchestrator | 2025-02-04 09:36:21.109011 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-02-04 09:36:21.109019 | orchestrator | Tuesday 04 February 2025 09:31:22 +0000 (0:00:00.445) 0:01:46.504 ****** 2025-02-04 09:36:21.109028 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.109037 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.109045 | orchestrator | 
skipping: [testbed-node-2] 2025-02-04 09:36:21.109054 | orchestrator | 2025-02-04 09:36:21.109063 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-02-04 09:36:21.109071 | orchestrator | Tuesday 04 February 2025 09:31:23 +0000 (0:00:01.503) 0:01:48.008 ****** 2025-02-04 09:36:21.109080 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:36:21.109093 | orchestrator | 2025-02-04 09:36:21.109102 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-02-04 09:36:21.109110 | orchestrator | Tuesday 04 February 2025 09:31:24 +0000 (0:00:00.750) 0:01:48.758 ****** 2025-02-04 09:36:21.109137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-04 09:36:21.109149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.109158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.109167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-04 09:36:21.109181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.109195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.109211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-02-04 09:36:21.109220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.109229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.109238 | orchestrator | 2025-02-04 09:36:21.109247 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-02-04 09:36:21.109256 | orchestrator | Tuesday 04 February 2025 09:31:30 +0000 (0:00:06.309) 0:01:55.068 ****** 2025-02-04 09:36:21.109269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-02-04 09:36:21.109291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.109301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.109310 | orchestrator | skipping: 
[testbed-node-0] 2025-02-04 09:36:21.109319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-02-04 09:36:21.109328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.109342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.109355 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.109370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-api:2024.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-02-04 09:36:21.109380 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-keystone-listener:2024.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.109389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/barbican-worker:2024.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.109398 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.109407 | orchestrator | 2025-02-04 09:36:21.109416 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-02-04 09:36:21.109425 | orchestrator | Tuesday 04 February 2025 09:31:32 +0000 (0:00:01.434) 0:01:56.503 ****** 2025-02-04 09:36:21.109434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-02-04 09:36:21.109443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-02-04 09:36:21.109453 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.109462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-02-04 09:36:21.109475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-02-04 09:36:21.109484 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.109498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-02-04 09:36:21.109507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-02-04 09:36:21.109529 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.109538 | orchestrator | 2025-02-04 09:36:21.109547 | orchestrator | TASK [proxysql-config : Copying 
over barbican ProxySQL users config] *********** 2025-02-04 09:36:21.109556 | orchestrator | Tuesday 04 February 2025 09:31:33 +0000 (0:00:01.505) 0:01:58.008 ****** 2025-02-04 09:36:21.109564 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.109573 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.109582 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.109591 | orchestrator | 2025-02-04 09:36:21.109599 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-02-04 09:36:21.109608 | orchestrator | Tuesday 04 February 2025 09:31:34 +0000 (0:00:00.549) 0:01:58.557 ****** 2025-02-04 09:36:21.109617 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.109625 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.109638 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.109647 | orchestrator | 2025-02-04 09:36:21.109656 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-02-04 09:36:21.109668 | orchestrator | Tuesday 04 February 2025 09:31:35 +0000 (0:00:01.771) 0:02:00.329 ****** 2025-02-04 09:36:21.109677 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.109685 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.109694 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.109703 | orchestrator | 2025-02-04 09:36:21.109712 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-02-04 09:36:21.109721 | orchestrator | Tuesday 04 February 2025 09:31:36 +0000 (0:00:00.472) 0:02:00.801 ****** 2025-02-04 09:36:21.109730 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:36:21.109738 | orchestrator | 2025-02-04 09:36:21.109747 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-02-04 09:36:21.109756 | orchestrator | Tuesday 04 February 2025 09:31:38 +0000 (0:00:01.748) 0:02:02.550 ****** 2025-02-04 09:36:21.109765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-02-04 09:36:21.109797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-02-04 09:36:21.109813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-02-04 09:36:21.109823 | orchestrator | 2025-02-04 09:36:21.109832 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-02-04 09:36:21.109840 | orchestrator | Tuesday 04 February 2025 09:31:41 +0000 (0:00:03.049) 0:02:05.599 ****** 2025-02-04 09:36:21.109854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-02-04 09:36:21.109864 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.109873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-02-04 09:36:21.109883 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.109898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-02-04 09:36:21.109907 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.109916 | orchestrator | 2025-02-04 09:36:21.109931 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-02-04 09:36:21.109940 | orchestrator | Tuesday 04 February 2025 09:31:43 +0000 (0:00:01.786) 0:02:07.386 ****** 2025-02-04 09:36:21.109949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-02-04 09:36:21.109982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-02-04 09:36:21.109992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-02-04 09:36:21.110005 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.110052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-02-04 09:36:21.110064 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.110074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-02-04 09:36:21.110083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-02-04 09:36:21.110092 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.110100 | orchestrator | 2025-02-04 09:36:21.110109 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-02-04 09:36:21.110118 | orchestrator | Tuesday 04 February 2025 09:31:46 +0000 (0:00:02.996) 0:02:10.383 ****** 2025-02-04 09:36:21.110126 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.110135 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.110144 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.110153 | orchestrator | 2025-02-04 09:36:21.110162 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-02-04 09:36:21.110170 | orchestrator | Tuesday 04 February 2025 09:31:46 +0000 (0:00:00.506) 0:02:10.889 ****** 2025-02-04 09:36:21.110179 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.110188 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.110197 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.110206 | orchestrator | 2025-02-04 09:36:21.110215 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-02-04 09:36:21.110231 | orchestrator | Tuesday 04 February 2025 09:31:48 +0000 (0:00:01.504) 0:02:12.394 ****** 2025-02-04 09:36:21.110240 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:36:21.110249 | orchestrator | 2025-02-04 09:36:21.110257 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-02-04 09:36:21.110266 | orchestrator | Tuesday 04 February 2025 09:31:49 +0000 (0:00:01.020) 0:02:13.414 ****** 2025-02-04 09:36:21.110275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-04 09:36:21.110285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.110299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.110310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.110327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-04 09:36:21.110341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.110351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-02-04 09:36:21.110364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.110374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.110383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.110405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.110415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.110423 | orchestrator | 2025-02-04 09:36:21.110433 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-02-04 09:36:21.110442 | orchestrator | Tuesday 04 February 2025 09:31:55 +0000 (0:00:06.702) 0:02:20.117 ****** 2025-02-04 09:36:21.110461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-04 09:36:21.110477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.110486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.110500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.110509 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.110518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-04 09:36:21.110534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.110549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-04 
09:36:21.110559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.110572 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.110582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-api:2024.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-02-04 09:36:21.110591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-scheduler:2024.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.110605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-volume:2024.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.110619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/cinder-backup:2024.1', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.110629 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.110637 | orchestrator | 2025-02-04 09:36:21.110646 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-02-04 09:36:21.110655 | orchestrator | Tuesday 04 February 2025 09:31:57 +0000 (0:00:01.316) 0:02:21.434 ****** 2025-02-04 09:36:21.110664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-02-04 09:36:21.110678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-02-04 09:36:21.110687 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.110696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-02-04 09:36:21.110705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-02-04 09:36:21.110713 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.110723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-02-04 09:36:21.110732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-02-04 09:36:21.110741 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.110750 | orchestrator | 2025-02-04 09:36:21.110759 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-02-04 09:36:21.110768 | orchestrator | Tuesday 04 February 2025 09:31:58 +0000 (0:00:01.654) 0:02:23.088 ****** 2025-02-04 09:36:21.110820 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.110830 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.110839 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.110847 | orchestrator | 2025-02-04 09:36:21.110856 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-02-04 09:36:21.110865 | orchestrator | Tuesday 04 February 2025 09:31:59 +0000 (0:00:00.537) 0:02:23.626 ****** 2025-02-04 09:36:21.110874 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.110882 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.110891 | orchestrator | skipping: [testbed-node-2] 2025-02-04 
09:36:21.110913 | orchestrator | 2025-02-04 09:36:21.110923 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-02-04 09:36:21.110931 | orchestrator | Tuesday 04 February 2025 09:32:01 +0000 (0:00:02.015) 0:02:25.641 ****** 2025-02-04 09:36:21.110940 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.110949 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.110958 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.110966 | orchestrator | 2025-02-04 09:36:21.110975 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-02-04 09:36:21.110984 | orchestrator | Tuesday 04 February 2025 09:32:01 +0000 (0:00:00.554) 0:02:26.195 ****** 2025-02-04 09:36:21.110993 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.111001 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.111010 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.111019 | orchestrator | 2025-02-04 09:36:21.111028 | orchestrator | TASK [include_role : designate] ************************************************ 2025-02-04 09:36:21.111036 | orchestrator | Tuesday 04 February 2025 09:32:02 +0000 (0:00:00.558) 0:02:26.754 ****** 2025-02-04 09:36:21.111050 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:36:21.111059 | orchestrator | 2025-02-04 09:36:21.111068 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-02-04 09:36:21.111076 | orchestrator | Tuesday 04 February 2025 09:32:03 +0000 (0:00:01.310) 0:02:28.064 ****** 2025-02-04 09:36:21.111091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-04 09:36:21 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:36:21.111117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-04 09:36:21.111127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api',
'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-04 09:36:21.111136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.111146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.111162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-04 09:36:21.111183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.111194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.111203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.111212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.111226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.111236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.111244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  
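The selection pattern visible in these haproxy-config tasks is the same for every service: only items whose value carries a 'haproxy' key with enabled listeners report "changed" (cinder-api and designate-api above), while scheduler, volume, backup, mdns, producer, and worker entries, plus the disabled designate-sink, report "skipping". Below is a minimal Python sketch of that filtering; the service map is an abbreviated, hypothetical stand-in for the full kolla-ansible definitions printed in the log, not the role's actual implementation.

# Sketch only: abbreviated stand-in for the per-service dicts shown above.
services = {
    "designate-api": {
        "enabled": True,
        "haproxy": {
            "designate_api": {
                "enabled": "yes", "mode": "http", "external": False,
                "port": "9001", "listen_port": "9001",
            },
            "designate_api_external": {
                "enabled": "yes", "mode": "http", "external": True,
                "external_fqdn": "api.testbed.osism.xyz",
                "port": "9001", "listen_port": "9001",
            },
        },
    },
    # No 'haproxy' key: reported as "skipping" in the log.
    "designate-worker": {"enabled": True},
    # Disabled service: also skipped.
    "designate-sink": {"enabled": False},
}

def haproxy_listeners(services):
    """Yield (listener, config) pairs that would receive HAProxy config."""
    for name, svc in services.items():
        if not svc.get("enabled"):
            continue
        for listener, cfg in svc.get("haproxy", {}).items():
            if cfg.get("enabled") == "yes":
                yield listener, cfg

for listener, cfg in haproxy_listeners(services):
    side = "external" if cfg["external"] else "internal"
    print(f"{listener}: {cfg['mode']} on :{cfg['listen_port']} ({side})")

The "changed" results for the three nodes differ only in the node-local healthcheck address (192.168.16.10, .11, .12), which is why each node renders its own copy of an otherwise identical listener set.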
2025-02-04 09:36:21.111261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.111270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-02-04 09:36:21.111279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-04 09:36:21.111288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.111304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.111314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.111331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.111340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.111348 | orchestrator | 2025-02-04 09:36:21.111356 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-02-04 09:36:21.111365 | orchestrator | Tuesday 04 February 2025 09:32:09 +0000 (0:00:06.199) 0:02:34.263 ****** 2025-02-04 09:36:21.111373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-04 09:36:21.111388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-04 09:36:21.111397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.111410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.111422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.111431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.111440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.111449 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.111462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 
'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-04 09:36:21.111471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-04 09:36:21.111484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.111492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.111505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.111514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.111522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.111536 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.111545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-api:2024.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-02-04 09:36:21.111557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-backend-bind9:2024.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-02-04 09:36:21.111566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-central:2024.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.111580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'nexus.testbed.osism.xyz:8193/kolla/designate-mdns:2024.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.111589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-producer:2024.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.111598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-worker:2024.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.111612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/designate-sink:2024.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.111621 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.111630 | orchestrator | 2025-02-04 09:36:21.111643 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-02-04 09:36:21.111651 | orchestrator | Tuesday 04 February 2025 09:32:11 +0000 (0:00:01.418) 0:02:35.682 ****** 2025-02-04 09:36:21.111659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-02-04 09:36:21.111668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-02-04 09:36:21.111677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-02-04 09:36:21.111685 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.111694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-02-04 09:36:21.111702 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.111710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-02-04 09:36:21.111722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-02-04 09:36:21.111730 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.111738 | orchestrator | 2025-02-04 09:36:21.111746 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-02-04 09:36:21.111754 | orchestrator | Tuesday 04 February 2025 09:32:13 +0000 (0:00:01.749) 0:02:37.432 ****** 2025-02-04 09:36:21.111762 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.111785 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.111794 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.111802 | orchestrator | 2025-02-04 09:36:21.111811 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-02-04 09:36:21.111819 | orchestrator | Tuesday 04 February 2025 09:32:13 +0000 (0:00:00.434) 0:02:37.866 ****** 2025-02-04 09:36:21.111831 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.111839 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.111847 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.111855 | orchestrator | 2025-02-04 09:36:21.111864 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-02-04 09:36:21.111872 | orchestrator | Tuesday 04 February 2025 09:32:15 +0000 (0:00:02.377) 0:02:40.244 ****** 2025-02-04 09:36:21.111880 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.111888 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.111896 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.111904 | orchestrator | 2025-02-04 09:36:21.111912 | orchestrator | TASK [include_role : glance] *************************************************** 2025-02-04 09:36:21.111920 | orchestrator | Tuesday 04 February 2025 09:32:16 +0000 (0:00:00.501) 0:02:40.745 ****** 2025-02-04 09:36:21.111929 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:36:21.111937 | orchestrator | 2025-02-04 09:36:21.111945 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-02-04 09:36:21.111953 | orchestrator | Tuesday 04 February 2025 09:32:18 +0000 (0:00:01.770) 0:02:42.516 ****** 2025-02-04 09:36:21.111961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-02-04 09:36:21.111986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-04 09:36:21.112001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-02-04 09:36:21.112015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-04 09:36:21.112030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-02-04 09:36:21.112051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-04 09:36:21.112060 | orchestrator | 2025-02-04 09:36:21.112068 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-02-04 09:36:21.112076 | orchestrator | Tuesday 04 February 2025 09:32:27 +0000 
(0:00:09.809) 0:02:52.325 ****** 2025-02-04 09:36:21.112090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-02-04 09:36:21.112109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-02-04 09:36:21.112122 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-04 09:36:21.112243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 
'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-04 09:36:21.112263 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.112272 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.112288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-api:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-02-04 09:36:21.112306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/glance-tls-proxy:2024.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-02-04 09:36:21.112320 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.112330 | orchestrator | 2025-02-04 09:36:21.112338 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-02-04 09:36:21.112347 | orchestrator | Tuesday 04 February 2025 09:32:37 +0000 (0:00:09.305) 0:03:01.630 ****** 2025-02-04 09:36:21.112356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-02-04 09:36:21.112365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-02-04 09:36:21.112374 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.112389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-02-04 09:36:21.112404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-02-04 09:36:21.112424 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.112439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-02-04 09:36:21.112452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-02-04 09:36:21.112502 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.112517 | orchestrator | 2025-02-04 09:36:21.112530 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-02-04 09:36:21.112543 | orchestrator | Tuesday 04 February 2025 09:32:44 +0000 (0:00:07.005) 0:03:08.636 ****** 2025-02-04 09:36:21.112557 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.112567 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.112575 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.112583 | orchestrator | 2025-02-04 09:36:21.112592 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-02-04 09:36:21.112600 | orchestrator | Tuesday 04 February 2025 09:32:44 +0000 (0:00:00.510) 0:03:09.146 ****** 2025-02-04 09:36:21.112608 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.112616 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.112624 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.112632 | orchestrator | 2025-02-04 09:36:21.112640 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-02-04 09:36:21.112648 | orchestrator | Tuesday 04 February 2025 09:32:46 +0000 (0:00:01.361) 0:03:10.508 ****** 2025-02-04 09:36:21.112656 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.112664 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.112672 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.112681 | orchestrator | 2025-02-04 09:36:21.112689 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-02-04 09:36:21.112697 | orchestrator | Tuesday 04 February 2025 09:32:46 +0000 (0:00:00.361) 0:03:10.870 ****** 2025-02-04 09:36:21.112705 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:36:21.112713 | orchestrator | 2025-02-04 09:36:21.112721 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-02-04 09:36:21.112729 | orchestrator | Tuesday 04 February 2025 09:32:47 +0000 (0:00:01.301) 0:03:12.171 ****** 2025-02-04 09:36:21.112738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-04 09:36:21.112764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-04 09:36:21.112796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-04 09:36:21.112806 | orchestrator | 2025-02-04 09:36:21.112815 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-02-04 09:36:21.112823 | orchestrator | Tuesday 04 February 2025 09:32:52 +0000 (0:00:04.934) 0:03:17.106 ****** 2025-02-04 09:36:21.112831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-02-04 09:36:21.112839 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.112848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-02-04 09:36:21.112857 | orchestrator | skipping: [testbed-node-1]
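
Note how the loop items mix native and string booleans for 'enabled': the grafana haproxy entries above use 'yes' where the glance entries used True, and glance-tls-proxy ('enabled': 'no') was skipped while glance-api and grafana were templated ("changed"). A minimal sketch of the yes/no-versus-boolean normalization this behavior implies is shown below; it is plain illustrative Python, and service_enabled is a hypothetical helper, not kolla-ansible's actual condition (the real logic lives in the haproxy-config role):

    def service_enabled(value) -> bool:
        # Treat native bools as-is; fold kolla-style 'yes'/'no' strings
        # onto booleans so both spellings behave alike.
        if isinstance(value, bool):
            return value
        return str(value).strip().lower() in ("yes", "true", "1", "on")

    # 'enabled' values copied from items in this log; outcomes match the run.
    for name, enabled in [("glance-api", True), ("glance-tls-proxy", "no"), ("grafana", True)]:
        print(name, "->", "templated" if service_enabled(enabled) else "skipped")
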
2025-02-04 09:36:21.112867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-02-04 09:36:21.112877 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.112886 | orchestrator | 2025-02-04 09:36:21.112895 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-02-04 09:36:21.112904 | orchestrator | Tuesday 04 February 2025 09:32:53 +0000 (0:00:00.500) 0:03:17.606 ****** 2025-02-04 09:36:21.112919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-02-04 09:36:21.112932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-02-04 09:36:21.112942 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.112951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-02-04 09:36:21.112960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-02-04 09:36:21.112969 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.112978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-02-04 09:36:21.112987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-02-04 09:36:21.112996 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.113006 | orchestrator | 2025-02-04 09:36:21.113014 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-02-04 09:36:21.113023 | orchestrator | Tuesday 04 February 2025 09:32:54 +0000 (0:00:01.060) 0:03:18.666 ****** 2025-02-04 09:36:21.113032 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.113041 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.113051 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.113060 | orchestrator | 2025-02-04 09:36:21.113069 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-02-04 09:36:21.113078 | orchestrator | Tuesday 04 February 2025 09:32:54 +0000 (0:00:00.494) 0:03:19.161 ****** 2025-02-04 09:36:21.113087 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:36:21.113096 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.113104 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.113112 | orchestrator | 2025-02-04 09:36:21.113120 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-02-04 09:36:21.113128 | orchestrator | Tuesday 04 February 2025 09:32:56 +0000 (0:00:01.239) 0:03:20.400 ****** 2025-02-04 09:36:21.113136 | orchestrator | included: heat for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:36:21.113144 | orchestrator | 2025-02-04 09:36:21.113152 | orchestrator | TASK [haproxy-config : Copying over heat haproxy config] *********************** 2025-02-04 09:36:21.113160 | orchestrator | Tuesday 04 February 2025 09:32:57 +0000 (0:00:01.339) 0:03:21.740 ****** 2025-02-04 09:36:21.113169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-api:2024.1', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-02-04 09:36:21.113178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-api:2024.1', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-02-04 09:36:21.113203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-api:2024.1', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})
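
Each service item above also carries a healthcheck dict (interval, retries, start_period, timeout, and a CMD-SHELL test such as healthcheck_curl http://192.168.16.10:8004). As a rough illustration only, not kolla_docker's actual code, and assuming the bare numbers are read as seconds, such a dict lines up with Docker's --health-* options roughly like this:

    healthcheck = {
        "interval": "30", "retries": "3", "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8004"],
        "timeout": "30",
    }

    # Map the dict onto docker run's health flags (durations assumed seconds).
    flags = [
        "--health-cmd={!r}".format(healthcheck["test"][1]),
        "--health-interval={}s".format(healthcheck["interval"]),
        "--health-retries={}".format(healthcheck["retries"]),
        "--health-start-period={}s".format(healthcheck["start_period"]),
        "--health-timeout={}s".format(healthcheck["timeout"]),
    ]
    print(" ".join(flags))
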
2025-02-04 09:36:21.113212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-api-cfn:2024.1', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-02-04 09:36:21.113222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-engine:2024.1', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.113231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-api-cfn:2024.1', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-02-04 09:36:21.113246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-engine:2024.1', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.113258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-api-cfn:2024.1', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries':
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-02-04 09:36:21.113267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-engine:2024.1', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.113276 | orchestrator | 2025-02-04 09:36:21.113284 | orchestrator | TASK [haproxy-config : Add configuration for heat when using single external frontend] *** 2025-02-04 09:36:21.113292 | orchestrator | Tuesday 04 February 2025 09:33:06 +0000 (0:00:08.994) 0:03:30.734 ****** 2025-02-04 09:36:21.113306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-api:2024.1', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-02-04 09:36:21.113315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-api-cfn:2024.1', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-02-04 09:36:21.113329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-engine:2024.1', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.113338 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.113349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-api:2024.1', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-02-04 09:36:21.113364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-api-cfn:2024.1', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-02-04 09:36:21.113373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-engine:2024.1', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.113381 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.113390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-api:2024.1', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 
'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-02-04 09:36:21.113403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-api-cfn:2024.1', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-02-04 09:36:21.113411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/heat-engine:2024.1', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.113420 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.113428 | orchestrator | 2025-02-04 09:36:21.113436 | orchestrator | TASK [haproxy-config : Configuring firewall for heat] ************************** 2025-02-04 09:36:21.113447 | orchestrator | Tuesday 04 February 2025 09:33:07 +0000 (0:00:01.024) 0:03:31.758 ****** 2025-02-04 09:36:21.113456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-02-04 09:36:21.113465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-02-04 09:36:21.113474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-02-04 09:36:21.113482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-02-04 09:36:21.113490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-02-04 09:36:21.113499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': 
'8000', 'tls_backend': 'no'}})  2025-02-04 09:36:21.113508 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.113519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-02-04 09:36:21.113527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-02-04 09:36:21.113536 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.113550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-02-04 09:36:21.113558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-02-04 09:36:21.113566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-02-04 09:36:21.113574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-02-04 09:36:21.113582 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.113590 | orchestrator | 2025-02-04 09:36:21.113598 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL users config] *************** 2025-02-04 09:36:21.113606 | orchestrator | Tuesday 04 February 2025 09:33:08 +0000 (0:00:01.235) 0:03:32.994 ****** 2025-02-04 09:36:21.113615 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.113623 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.113631 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.113639 | orchestrator | 2025-02-04 09:36:21.113647 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL rules config] *************** 2025-02-04 09:36:21.113655 | orchestrator | Tuesday 04 February 2025 09:33:09 +0000 (0:00:00.446) 0:03:33.440 ****** 2025-02-04 09:36:21.113663 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.113671 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.113679 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.113687 | orchestrator | 2025-02-04 09:36:21.113695 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-02-04 09:36:21.113703 | orchestrator | Tuesday 04 February 2025 09:33:10 +0000 (0:00:01.503) 0:03:34.943 ****** 2025-02-04 09:36:21.113711 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:36:21.113719 | orchestrator | 2025-02-04 09:36:21.113728 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-02-04 09:36:21.113736 | orchestrator | Tuesday 04 February 2025 09:33:11 +0000 (0:00:01.148) 0:03:36.092 ****** 2025-02-04 09:36:21.113756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': 
{'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-04 09:36:21.113787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-04 09:36:21.113808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-04 09:36:21.113828 | orchestrator | 2025-02-04 09:36:21.113836 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-02-04 09:36:21.113844 | orchestrator | Tuesday 04 February 2025 09:33:17 +0000 (0:00:05.732) 0:03:41.824 ****** 2025-02-04 09:36:21.113859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 
'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-02-04 09:36:21.113868 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.113877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-02-04 09:36:21.113896 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.113908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-02-04 09:36:21.113917 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.113926 | orchestrator | 2025-02-04 09:36:21.113934 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-02-04 09:36:21.113949 | orchestrator | Tuesday 04 February 2025 09:33:18 +0000 (0:00:00.973) 0:03:42.797 ****** 2025-02-04 09:36:21.113957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-02-04 09:36:21.113967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-02-04 09:36:21.113977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-02-04 09:36:21.113986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-02-04 09:36:21.113995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-02-04 09:36:21.114004 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.114038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-02-04 09:36:21.114049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-02-04 09:36:21.114058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-02-04 09:36:21.114066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-02-04 09:36:21.114074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-02-04 09:36:21.114084 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.114097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}})  2025-02-04 09:36:21.114106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-02-04 09:36:21.114119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-02-04 09:36:21.114128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-02-04 09:36:21.114137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-02-04 09:36:21.114149 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.114157 | orchestrator | 2025-02-04 09:36:21.114166 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-02-04 09:36:21.114175 | orchestrator | Tuesday 04 February 2025 09:33:20 +0000 (0:00:01.581) 0:03:44.378 ****** 2025-02-04 09:36:21.114183 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.114191 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.114199 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.114207 | orchestrator | 2025-02-04 09:36:21.114215 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-02-04 09:36:21.114227 | orchestrator | Tuesday 04 February 2025 09:33:20 +0000 (0:00:00.669) 0:03:45.048 ****** 2025-02-04 09:36:21.114235 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.114243 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.114251 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.114259 | orchestrator | 2025-02-04 09:36:21.114268 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-02-04 09:36:21.114276 | orchestrator | Tuesday 04 February 2025 09:33:22 +0000 (0:00:01.869) 0:03:46.918 ****** 2025-02-04 09:36:21.114284 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.114292 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.114304 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.114312 | orchestrator | 2025-02-04 09:36:21.114320 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-02-04 09:36:21.114328 | orchestrator | Tuesday 04 February 2025 09:33:23 +0000 (0:00:00.601) 0:03:47.519 ****** 2025-02-04 09:36:21.114336 | orchestrator | included: ironic for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:36:21.114344 | orchestrator | 2025-02-04 09:36:21.114353 | orchestrator | TASK [haproxy-config : Copying over ironic haproxy config] ********************* 2025-02-04 09:36:21.114361 | orchestrator | Tuesday 04 February 2025 09:33:24 +0000 
(0:00:01.237) 0:03:48.756 ****** 2025-02-04 09:36:21.114370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-04 09:36:21.114384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.114399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 2025-02-04 09:36:21.114414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}}) 
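The pattern in the loop results above, "changed" for ironic-api but "skipping" for ironic-conductor, tracks whether a service item declares a 'haproxy' mapping: the role only renders frontend/backend configuration for items that carry one. A minimal sketch of that per-item condition, assuming a trimmed-down services dict (the helper name and the data are illustrative, not the role's actual code):

    # Sketch of the per-item condition suggested by the log above: only
    # services that are enabled AND declare a 'haproxy' mapping get a
    # config rendered ("changed"); the rest are reported as "skipping".
    def needs_haproxy_config(service: dict) -> bool:
        return bool(service.get("enabled")) and bool(service.get("haproxy"))

    services = {
        "ironic-api": {
            "enabled": True,
            "haproxy": {"ironic_api": {"enabled": "yes", "port": "6385"}},
        },
        "ironic-conductor": {"enabled": True},  # no 'haproxy' key -> skipped
    }

    for name, svc in services.items():
        print(name, "changed" if needs_haproxy_config(svc) else "skipping")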
2025-02-04 09:36:21.114429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.114443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.114473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-04 09:36:21.114489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-04 09:36:21.114498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-04 09:36:21.114508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-04 09:36:21.114516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-04 09:36:21.114525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-04 09:36:21.114545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-04 09:36:21.114558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-04 09:36:21.114659 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-04 09:36:21.114672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-04 09:36:21.114681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}}) 2025-02-04 09:36:21.114699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-04 09:36:21.114708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-04 09:36:21.114767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 
'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-04 09:36:21.114819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-04 09:36:21.114828 | orchestrator | 2025-02-04 09:36:21.114837 | orchestrator | TASK [haproxy-config : Add configuration for ironic when using single external frontend] *** 2025-02-04 09:36:21.114845 | orchestrator | Tuesday 04 February 2025 09:33:33 +0000 (0:00:09.150) 0:03:57.906 ****** 2025-02-04 09:36:21.114854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-04 09:36:21.114863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.114872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 
'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-04 09:36:21.114896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-04 09:36:21.114951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-04 09:36:21.114964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-04 09:36:21.114972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-04 09:36:21.114981 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.114990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-04 09:36:21.114999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.115023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-04 09:36:21.115073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-tftp', 'value': {'container_name': 'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-04 09:36:21.115083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-04 09:36:21.115091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-04 09:36:21.115105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-api', 'value': {'container_name': 'ironic_api', 'group': 'ironic-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-api:2024.1', 'volumes': ['/etc/kolla/ironic-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6385'], 'timeout': '30'}, 'haproxy': {'ironic_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}, 'ironic_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}}}})  2025-02-04 09:36:21.115118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-04 09:36:21.115125 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.115133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-conductor', 'value': {'container_name': 'ironic_conductor', 'group': 'ironic-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-conductor:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/sys:/sys', '/dev:/dev', '/run:/run:shared', 'kolla_logs:/var/log/kolla', 'ironic:/var/lib/ironic', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-conductor 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.115174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-inspector', 'value': {'container_name': 'ironic_inspector', 'group': 'ironic-inspector', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-inspector:2024.1', 'privileged': True, 'volumes': ['/etc/kolla/ironic-inspector/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/var/lib/ironic-inspector/dhcp-hostsdir', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-inspector 5672'], 'timeout': '30'}, 'haproxy': {'ironic_inspector': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}, 'ironic_inspector_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}}}})  2025-02-04 09:36:21.115205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-tftp', 'value': {'container_name': 
'ironic_tftp', 'group': 'ironic-tftp', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'environment': {'TFTPBOOT_PATH': '/var/lib/ironic/tftpboot', 'HTTPBOOT_PATH': '/var/lib/ironic/httpboot'}, 'volumes': ['/etc/kolla/ironic-tftp/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}}})  2025-02-04 09:36:21.115213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-http', 'value': {'container_name': 'ironic_http', 'group': 'ironic-http', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-pxe:2024.1', 'volumes': ['/etc/kolla/ironic-http/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ironic:/var/lib/ironic', 'kolla_logs:/var/log/kolla'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen apache2 8089'], 'timeout': '30'}}})  2025-02-04 09:36:21.115221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-dnsmasq', 'value': {'container_name': 'ironic_dnsmasq', 'group': 'ironic-inspector', 'enabled': 'no', 'cap_add': ['NET_ADMIN', 'NET_RAW'], 'image': 'nexus.testbed.osism.xyz:8193/kolla/dnsmasq:2024.1', 'volumes': ['/etc/kolla/ironic-dnsmasq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_inspector_dhcp_hosts:/etc/dnsmasq/dhcp-hostsdir:ro'], 'dimensions': {}}})  2025-02-04 09:36:21.115246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-prometheus-exporter', 'value': {'container_name': 'ironic_prometheus_exporter', 'group': 'ironic-conductor', 'enabled': False, 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-prometheus-exporter:2024.1', 'volumes': ['/etc/kolla/ironic-prometheus-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'ironic_prometheus_exporter_data:/var/lib/ironic-metrics'], 'dimensions': {}}})  2025-02-04 09:36:21.115255 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.115263 | orchestrator | 2025-02-04 09:36:21.115271 | orchestrator | TASK [haproxy-config : Configuring firewall for ironic] ************************ 2025-02-04 09:36:21.115278 | orchestrator | Tuesday 04 February 2025 09:33:34 +0000 (0:00:01.010) 0:03:58.916 ****** 2025-02-04 09:36:21.115287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}})  2025-02-04 09:36:21.115294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}})  2025-02-04 09:36:21.115302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}})  2025-02-04 09:36:21.115311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}})  
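Every ironic listener above comes as an internal/external pair: the internal entry has 'external': False, the external one is keyed to external_fqdn api.testbed.osism.xyz, and both reuse the same port (6385 for ironic_api, 5050 for ironic_inspector). The firewall task iterating these pairs skips on all three nodes, so its conditional evaluated false in this testbed. A small sketch of how such entries partition into the port sets each VIP would need opened (the data is copied from the items above; the partitioning code itself is illustrative, not kolla-ansible code):

    # Partition the haproxy listener entries from the log by scope; a
    # firewall rule generator would typically open 'port' on the matching VIP.
    listeners = {
        "ironic_api": {"external": False, "port": "6385"},
        "ironic_api_external": {"external": True, "port": "6385",
                                "external_fqdn": "api.testbed.osism.xyz"},
        "ironic_inspector": {"external": False, "port": "5050"},
        "ironic_inspector_external": {"external": True, "port": "5050",
                                      "external_fqdn": "api.testbed.osism.xyz"},
    }

    internal = sorted(v["port"] for v in listeners.values() if not v["external"])
    external = sorted(v["port"] for v in listeners.values() if v["external"])
    print("internal VIP ports:", internal)  # ['5050', '6385']
    print("external VIP ports:", external)  # ['5050', '6385']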
2025-02-04 09:36:21.115361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic_inspector', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}})  2025-02-04 09:36:21.115375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic_inspector', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}})  2025-02-04 09:36:21.115383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic_inspector_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}})  2025-02-04 09:36:21.115392 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.115400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic_inspector_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}})  2025-02-04 09:36:21.115418 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.115426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}})  2025-02-04 09:36:21.115434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6385', 'listen_port': '6385', 'tls_backend': 'no'}})  2025-02-04 09:36:21.115442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic_inspector', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '5050', 'listen_port': '5050'}})  2025-02-04 09:36:21.115449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic_inspector_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5050', 'listen_port': '5050'}})  2025-02-04 09:36:21.115457 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.115465 | orchestrator | 2025-02-04 09:36:21.115477 | orchestrator | TASK [proxysql-config : Copying over ironic ProxySQL users config] ************* 2025-02-04 09:36:21.115485 | orchestrator | Tuesday 04 February 2025 09:33:35 +0000 (0:00:01.267) 0:04:00.183 ****** 2025-02-04 09:36:21.115492 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.115500 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.115507 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.115514 | orchestrator | 2025-02-04 09:36:21.115522 | orchestrator | TASK [proxysql-config : Copying over ironic ProxySQL rules config] ************* 2025-02-04 09:36:21.115529 | orchestrator | Tuesday 04 February 2025 09:33:36 +0000 (0:00:00.478) 0:04:00.662 ****** 2025-02-04 09:36:21.115536 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.115544 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.115551 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.115559 | orchestrator | 2025-02-04 09:36:21.115566 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-02-04 09:36:21.115574 | orchestrator | Tuesday 04 February 2025 09:33:37 +0000 (0:00:01.400) 0:04:02.063 ****** 2025-02-04 09:36:21.115581 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 
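The healthcheck dicts repeated throughout these items ('interval', 'retries', 'start_period', 'timeout' as strings of seconds, plus a CMD-SHELL test such as healthcheck_curl or healthcheck_port) correspond to Docker's HealthConfig, which expects the durations in nanoseconds. A hypothetical converter, exercised with the keystone values that follow below (the function is illustrative, not kolla-ansible code; the HealthConfig field names on the Docker side are real):

    # Convert a kolla-style healthcheck dict (seconds as strings) into
    # Docker HealthConfig units (nanoseconds for the duration fields).
    def to_docker_healthcheck(hc: dict) -> dict:
        ns = 1_000_000_000
        return {
            "Test": hc["test"],
            "Interval": int(hc["interval"]) * ns,
            "Timeout": int(hc["timeout"]) * ns,
            "Retries": int(hc["retries"]),
            "StartPeriod": int(hc["start_period"]) * ns,
        }

    print(to_docker_healthcheck({
        "interval": "30", "retries": "3", "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:5000"],
        "timeout": "30",
    }))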
2025-02-04 09:36:21.115588 | orchestrator | 2025-02-04 09:36:21.115596 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-02-04 09:36:21.115603 | orchestrator | Tuesday 04 February 2025 09:33:39 +0000 (0:00:01.344) 0:04:03.407 ****** 2025-02-04 09:36:21.115611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-04 09:36:21.115662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-04 09:36:21.115675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-04 09:36:21.115689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-04 09:36:21.115697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-04 09:36:21.115713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-04 09:36:21.115721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-04 09:36:21.115764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-04 09:36:21.115788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-04 09:36:21.115801 | orchestrator | 2025-02-04 09:36:21.115809 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-02-04 09:36:21.115816 | orchestrator | Tuesday 04 February 2025 09:33:44 +0000 (0:00:05.287) 0:04:08.695 ****** 2025-02-04 09:36:21.115831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-02-04 09:36:21.115839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-04 09:36:21.115847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-04 09:36:21.115854 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.115897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-02-04 09:36:21.115908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-04 09:36:21.115921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-04 09:36:21.115928 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.115943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-02-04 09:36:21.115952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-04 09:36:21.115994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-04 09:36:21.116070 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.116091 | orchestrator | 2025-02-04 09:36:21.116099 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-02-04 09:36:21.116107 | orchestrator | Tuesday 04 February 2025 09:33:45 +0000 (0:00:00.784) 0:04:09.480 ****** 2025-02-04 09:36:21.116117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-02-04 09:36:21.116136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-02-04 09:36:21.116144 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.116151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-02-04 09:36:21.116159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-02-04 09:36:21.116166 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.116174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-02-04 09:36:21.116182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-02-04 09:36:21.116190 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.116197 | orchestrator | 2025-02-04 09:36:21.116206 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-02-04 09:36:21.116213 | orchestrator | Tuesday 04 February 2025 09:33:46 +0000 (0:00:01.503) 0:04:10.983 ****** 2025-02-04 09:36:21.116221 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.116228 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.116235 | orchestrator 
2025-02-04 09:36:21.116206 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2025-02-04 09:36:21.116213 | orchestrator | Tuesday 04 February 2025 09:33:46 +0000 (0:00:01.503) 0:04:10.983 ******
2025-02-04 09:36:21.116221 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:36:21.116228 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:36:21.116235 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:36:21.116243 | orchestrator |
2025-02-04 09:36:21.116250 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2025-02-04 09:36:21.116258 | orchestrator | Tuesday 04 February 2025 09:33:47 +0000 (0:00:00.436) 0:04:11.420 ******
2025-02-04 09:36:21.116266 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:36:21.116281 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:36:21.116289 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:36:21.116297 | orchestrator |
2025-02-04 09:36:21.116305 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2025-02-04 09:36:21.116312 | orchestrator | Tuesday 04 February 2025 09:33:48 +0000 (0:00:01.664) 0:04:13.084 ******
2025-02-04 09:36:21.116320 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:36:21.116328 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:36:21.116335 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:36:21.116343 | orchestrator |
2025-02-04 09:36:21.116350 | orchestrator | TASK [include_role : magnum] ***************************************************
2025-02-04 09:36:21.116358 | orchestrator | Tuesday 04 February 2025 09:33:49 +0000 (0:00:00.603) 0:04:13.688 ******
2025-02-04 09:36:21.116365 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2025-02-04 09:36:21.116373 | orchestrator |
2025-02-04 09:36:21.116383 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2025-02-04 09:36:21.116391 | orchestrator | Tuesday 04 February 2025 09:33:51 +0000 (0:00:01.707) 0:04:15.396 ******
2025-02-04 09:36:21.116399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-02-04 09:36:21.116455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-02-04 09:36:21.116468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-02-04 09:36:21.116476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-02-04 09:36:21.116484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-02-04 09:36:21.116492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-02-04 09:36:21.116503 | orchestrator |
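Every container item in these tasks carries a `healthcheck` block (`interval`, `retries`, `start_period`, `test`, `timeout`); the `test` commands seen throughout this log (`healthcheck_curl`, `healthcheck_port`, `healthcheck_listen`, `/usr/bin/clustercheck`) are helper scripts shipped inside the kolla images. A rough sketch of how such a block maps onto the standard Docker health-check parameters follows; the flag mapping and the assumption that the numeric fields are seconds are illustrative, since kolla-ansible applies these values through its own container module rather than via CLI flags.

```python
# Sketch: translate a kolla-style healthcheck dict, as dumped in the log
# above, into 'docker run' health flags. The flag mapping is an assumption
# for illustration; kolla-ansible sets these via its own container module.

def healthcheck_flags(hc):
    assert hc["test"][0] == "CMD-SHELL"   # every check in this log uses CMD-SHELL
    cmd = " ".join(hc["test"][1:])        # e.g. 'healthcheck_curl http://...'
    return ["--health-cmd", cmd,
            "--health-interval", f"{hc['interval']}s",
            "--health-retries", hc["retries"],
            "--health-start-period", f"{hc['start_period']}s",
            "--health-timeout", f"{hc['timeout']}s"]

hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9511"],
      "timeout": "30"}
print(" ".join(healthcheck_flags(hc)))
```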
2025-02-04 09:36:21.116544 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2025-02-04 09:36:21.116555 | orchestrator | Tuesday 04 February 2025 09:33:56 +0000 (0:00:05.397) 0:04:20.793 ******
2025-02-04 09:36:21.116562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-02-04 09:36:21.116571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-02-04 09:36:21.116578 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:36:21.116586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-02-04 09:36:21.116593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-04
09:36:21.116605 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.116646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-api:2024.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-02-04 09:36:21.116657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/magnum-conductor:2024.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.116665 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.116672 | orchestrator | 2025-02-04 09:36:21.116680 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-02-04 09:36:21.116687 | orchestrator | Tuesday 04 February 2025 09:33:57 +0000 (0:00:01.226) 0:04:22.020 ****** 2025-02-04 09:36:21.116694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-02-04 09:36:21.116702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-02-04 09:36:21.116709 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.116716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-02-04 09:36:21.116724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-02-04 09:36:21.116731 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.116738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-02-04 09:36:21.116745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-02-04 09:36:21.116752 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.116760 | orchestrator | 2025-02-04 09:36:21.116813 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-02-04 09:36:21.116822 | orchestrator | Tuesday 04 February 2025 09:33:59 +0000 (0:00:01.914) 0:04:23.934 ****** 2025-02-04 09:36:21.116829 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.116836 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.116843 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.116851 | orchestrator | 2025-02-04 09:36:21.116858 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-02-04 09:36:21.116865 | orchestrator | Tuesday 04 February 2025 09:34:00 +0000 (0:00:00.609) 0:04:24.543 ****** 2025-02-04 09:36:21.116873 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.116880 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.116887 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.116895 | orchestrator | 2025-02-04 09:36:21.116902 | orchestrator | TASK [include_role : manila] *************************************************** 2025-02-04 09:36:21.116919 | orchestrator | Tuesday 04 February 2025 09:34:01 +0000 (0:00:01.285) 0:04:25.828 ****** 2025-02-04 09:36:21.116927 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:36:21.116934 | orchestrator | 2025-02-04 09:36:21.116942 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-02-04 09:36:21.116949 | orchestrator | Tuesday 04 February 2025 09:34:02 +0000 (0:00:01.495) 0:04:27.324 ****** 2025-02-04 09:36:21.116995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-02-04 09:36:21.117006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.117014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.117023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.117036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-02-04 09:36:21.117044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.117085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.117095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-02-04 09:36:21.117103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-02-04 09:36:21.117111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-02-04 09:36:21.117123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-02-04 09:36:21.117131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-02-04 09:36:21.117138 | orchestrator |
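A detail worth noting in the `volumes` lists above: the empty-string entries (`''`) are placeholders left by optional mounts that are unset in this deployment, and they must not reach the container engine as mount specifications. A minimal sketch of the filtering that has to happen before the container is created follows; kolla-ansible performs the equivalent step internally.

```python
# The '' placeholders in the 'volumes' lists above come from unset optional
# mounts. Only the non-empty entries form the effective mount list.
volumes = [
    '/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro',
    '/etc/localtime:/etc/localtime:ro',
    '/etc/timezone:/etc/timezone:ro',
    'kolla_logs:/var/log/kolla/',
    '',  # unset optional volume, must be dropped
]
effective = [v for v in volumes if v]
print(effective)  # same list without the empty placeholder
```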
2025-02-04 09:36:21.117145 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2025-02-04 09:36:21.117152 | orchestrator | Tuesday 04 February 2025 09:34:06 +0000 (0:00:04.003) 0:04:31.327 ******
2025-02-04 09:36:21.117196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-02-04 09:36:21.117206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-02-04 09:36:21.117214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-02-04 09:36:21.117226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-02-04 09:36:21.117234 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:36:21.117241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl
http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-02-04 09:36:21.117249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.117291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.117301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.117309 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.117317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-02-04 09:36:21.117329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-scheduler:2024.1', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.117346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-share:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.117353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'nexus.testbed.osism.xyz:8193/kolla/manila-data:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.117360 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.117366 | orchestrator | 2025-02-04 09:36:21.117373 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-02-04 09:36:21.117379 | orchestrator | Tuesday 04 February 2025 09:34:07 +0000 (0:00:00.867) 0:04:32.195 ****** 2025-02-04 09:36:21.117386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-02-04 09:36:21.117425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-02-04 09:36:21.117434 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.117440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-02-04 09:36:21.117447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-02-04 09:36:21.117453 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.117460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-02-04 09:36:21.117466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 
'listen_port': '8786'}})  2025-02-04 09:36:21.117476 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.117483 | orchestrator | 2025-02-04 09:36:21.117489 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-02-04 09:36:21.117496 | orchestrator | Tuesday 04 February 2025 09:34:08 +0000 (0:00:00.982) 0:04:33.178 ****** 2025-02-04 09:36:21.117502 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.117515 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.117523 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.117529 | orchestrator | 2025-02-04 09:36:21.117536 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-02-04 09:36:21.117543 | orchestrator | Tuesday 04 February 2025 09:34:09 +0000 (0:00:00.399) 0:04:33.577 ****** 2025-02-04 09:36:21.117549 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.117556 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.117567 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.117574 | orchestrator | 2025-02-04 09:36:21.117581 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-02-04 09:36:21.117588 | orchestrator | Tuesday 04 February 2025 09:34:10 +0000 (0:00:01.157) 0:04:34.735 ****** 2025-02-04 09:36:21.117594 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:36:21.117601 | orchestrator | 2025-02-04 09:36:21.117608 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-02-04 09:36:21.117615 | orchestrator | Tuesday 04 February 2025 09:34:11 +0000 (0:00:01.119) 0:04:35.854 ****** 2025-02-04 09:36:21.117622 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-02-04 09:36:21.117628 | orchestrator | 2025-02-04 09:36:21.117635 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-02-04 09:36:21.117642 | orchestrator | Tuesday 04 February 2025 09:34:14 +0000 (0:00:02.911) 0:04:38.766 ****** 2025-02-04 09:36:21.117649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-04 09:36:21.117692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-02-04 09:36:21.117707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-04 09:36:21.117716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-02-04 09:36:21.117746 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-02-04 09:36:21.117759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-02-04 09:36:21.117767 | orchestrator |
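Unlike the stateless API services, mariadb is balanced in TCP mode through a `custom_member_list`: testbed-node-0 is the single active Galera member, the other two nodes are marked `backup`, and every member is health-checked out of band via `check port 4569` against the clustercheck service configured above (the `MYSQL_USERNAME: haproxy` monitor user created earlier in this play). A sketch of roughly what this expands to in HAProxy terms; it is illustrative only, and the real kolla-ansible template emits more than this.

```python
# Sketch: expand the mariadb 'custom_member_list' above into an HAProxy
# listen block. Active/backup keeps all writes on one Galera node at a time;
# 'check port 4569' probes the clustercheck service instead of MySQL itself.
service = {
    "mode": "tcp",
    "listen_port": "3306",
    "frontend_tcp_extra": ["option clitcpka", "timeout client 3600s"],
    "backend_tcp_extra": ["option srvtcpka", "timeout server 3600s", "option httpchk"],
    "custom_member_list": [
        " server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5",
        " server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup",
        " server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup",
    ],
}
lines = ["listen mariadb",
         f"    mode {service['mode']}",
         f"    bind *:{service['listen_port']}"]
lines += [f"    {opt}" for opt in service["frontend_tcp_extra"] + service["backend_tcp_extra"]]
lines += [f"   {member}" for member in service["custom_member_list"]]  # entries carry their own leading space
print("\n".join(lines))
```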
2025-02-04 09:36:21.117811 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2025-02-04 09:36:21.117818 | orchestrator | Tuesday 04 February 2025 09:34:18 +0000 (0:00:04.025) 0:04:42.792 ******
2025-02-04 09:36:21.117825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-02-04 09:36:21.117832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-02-04 09:36:21.117839 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:36:21.117872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-02-04 09:36:21.117886 |
orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-02-04 09:36:21.117893 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.117900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-02-04 09:36:21.117972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-02-04 09:36:21.117988 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.117997 | orchestrator | 2025-02-04 09:36:21.118006 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-02-04 09:36:21.118037 | orchestrator | Tuesday 04 February 2025 09:34:21 +0000 (0:00:03.058) 0:04:45.850 ****** 2025-02-04 09:36:21.118057 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-02-04 09:36:21.118069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-02-04 09:36:21.118080 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.118091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-02-04 09:36:21.118102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-02-04 09:36:21.118113 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.118120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-02-04 09:36:21.118181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-02-04 09:36:21.118206 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.118214 | orchestrator | 2025-02-04 09:36:21.118221 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-02-04 09:36:21.118228 | orchestrator | Tuesday 04 February 2025 09:34:25 +0000 (0:00:04.295) 0:04:50.146 ****** 2025-02-04 09:36:21.118234 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.118241 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.118248 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.118255 | orchestrator | 2025-02-04 09:36:21.118262 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-02-04 09:36:21.118269 | orchestrator | Tuesday 04 February 2025 09:34:26 +0000 (0:00:00.387) 0:04:50.534 ****** 2025-02-04 09:36:21.118276 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.118283 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.118290 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.118297 | orchestrator | 2025-02-04 09:36:21.118311 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-02-04 09:36:21.118318 | orchestrator | Tuesday 04 February 2025 09:34:27 +0000 (0:00:01.602) 0:04:52.136 ****** 2025-02-04 09:36:21.118325 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.118331 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.118338 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.118345 | orchestrator | 2025-02-04 09:36:21.118351 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-02-04 09:36:21.118358 | orchestrator | Tuesday 04 February 2025 09:34:28 +0000 (0:00:00.686) 0:04:52.823 ****** 2025-02-04 09:36:21.118365 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:36:21.118371 | orchestrator | 2025-02-04 09:36:21.118378 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-02-04 09:36:21.118384 | orchestrator | Tuesday 04 February 2025 09:34:30 +0000 (0:00:01.905) 0:04:54.729 ****** 2025-02-04 09:36:21.118392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'nexus.testbed.osism.xyz:8193/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-02-04 09:36:21.118399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-02-04 09:36:21.118411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'nexus.testbed.osism.xyz:8193/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-02-04 09:36:21.118418 | orchestrator | 2025-02-04 09:36:21.118425 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-02-04 09:36:21.118432 | orchestrator | Tuesday 04 February 2025 09:34:32 +0000 (0:00:01.882) 0:04:56.611 ****** 2025-02-04 09:36:21.118475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'nexus.testbed.osism.xyz:8193/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-02-04 09:36:21.118485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'nexus.testbed.osism.xyz:8193/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-02-04 09:36:21.118492 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.118499 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.118506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 
'value': {'container_name': 'memcached', 'image': 'nexus.testbed.osism.xyz:8193/kolla/memcached:2024.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-02-04 09:36:21.118513 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.118520 | orchestrator | 2025-02-04 09:36:21.118526 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-02-04 09:36:21.118537 | orchestrator | Tuesday 04 February 2025 09:34:32 +0000 (0:00:00.441) 0:04:57.053 ****** 2025-02-04 09:36:21.118545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-02-04 09:36:21.118552 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.118559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-02-04 09:36:21.118566 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.118572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-02-04 09:36:21.118579 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.118586 | orchestrator | 2025-02-04 09:36:21.118592 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-02-04 09:36:21.118599 | orchestrator | Tuesday 04 February 2025 09:34:33 +0000 (0:00:01.123) 0:04:58.176 ****** 2025-02-04 09:36:21.118606 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.118613 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.118619 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.118626 | orchestrator | 2025-02-04 09:36:21.118633 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-02-04 09:36:21.118640 | orchestrator | Tuesday 04 February 2025 09:34:34 +0000 (0:00:00.579) 0:04:58.756 ****** 2025-02-04 09:36:21.118646 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.118653 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.118660 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.118666 | orchestrator | 2025-02-04 09:36:21.118673 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-02-04 09:36:21.118680 | orchestrator | Tuesday 04 February 2025 09:34:35 +0000 (0:00:01.407) 0:05:00.163 ****** 2025-02-04 09:36:21.118687 | orchestrator | 
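A pattern repeats through these tasks: each definition carries an 'enabled' flag (per haproxy sub-entry, and sometimes as a 'yes'/'no' string, as on neutron-tls-proxy further down), and loop items report "skipping" whenever the flag is falsy. That is why memcached's frontend and firewall items are all skipped even though its config file was copied. A minimal filter sketch under those assumptions; the function names are illustrative, not the role's.

# Sketch only: the enabled-gating visible in the skip messages above.
def enabled(flag) -> bool:
    # kolla-style dicts mix booleans with 'yes'/'no' strings; normalize both.
    return str(flag).lower() in ("true", "yes")

def haproxy_entries_to_configure(services: dict):
    for svc in services.values():
        for name, entry in (svc.get("haproxy") or {}).items():
            if enabled(entry.get("enabled", False)):
                yield name, entry

memcached = {"memcached": {"enabled": True,  # the container itself runs...
             "haproxy": {"memcached": {"enabled": False, "mode": "tcp",
                                       "port": "11211", "active_passive": True}}}}
assert list(haproxy_entries_to_configure(memcached)) == []  # ...but no frontend

'active_passive': True would only come into play if the frontend were enabled: the role would then typically mark all but one backend as 'backup', the same effect the mariadb entry achieves by hand.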
skipping: [testbed-node-0] 2025-02-04 09:36:21.118724 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.118734 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.118741 | orchestrator | 2025-02-04 09:36:21.118747 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-02-04 09:36:21.118762 | orchestrator | Tuesday 04 February 2025 09:34:36 +0000 (0:00:00.588) 0:05:00.751 ****** 2025-02-04 09:36:21.118768 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:36:21.118790 | orchestrator | 2025-02-04 09:36:21.118796 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-02-04 09:36:21.118803 | orchestrator | Tuesday 04 February 2025 09:34:38 +0000 (0:00:01.861) 0:05:02.613 ****** 2025-02-04 09:36:21.118809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-04 09:36:21.118821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-04 09:36:21.118828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-04 
09:36:21.118836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.118876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.118887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.118899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.118906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.118913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-04 09:36:21.118951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-04 09:36:21.118961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.118969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.118981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-04 09:36:21.118988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-04 09:36:21.118996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-04 09:36:21.119003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-04 09:36:21.119011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.119032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.119054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 
'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-04 09:36:21.119068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-04 09:36:21.119075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.119082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.119090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-04 09:36:21.119115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': 
False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-04 09:36:21.119123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-04 09:36:21.119135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-04 09:36:21.119153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.119161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.119168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-02-04 09:36:21.119193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-04 09:36:21.119206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-04 09:36:21.119213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-04 09:36:21.119226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.119234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.119252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.119260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-04 09:36:21.119273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.119280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.119288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-04 09:36:21.119295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.119321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-04 09:36:21.119329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-04 09:36:21.119340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.119347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-04 09:36:21.119355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.119362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-04 09:36:21.119369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-04 09:36:21.119392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.119405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-04 09:36:21.119412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-04 09:36:21.119419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.119426 | orchestrator | 2025-02-04 09:36:21.119433 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-02-04 09:36:21.119440 | orchestrator | Tuesday 04 February 2025 09:34:44 +0000 (0:00:05.810) 0:05:08.423 ****** 2025-02-04 09:36:21.119451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-04 09:36:21.119471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.119482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.119489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.119496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-04 09:36:21.119504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.119511 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-04 09:36:21.119533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-04 09:36:21.119546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.119553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.119561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-04 09:36:21.119568 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-server:2024.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-02-04 09:36:21.119575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.119603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.119612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-openvswitch-agent:2024.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.119620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-04 09:36:21.119628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-04 09:36:21.119635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-linuxbridge-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.119658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.119672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.119680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 
'neutron_dhcp_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-dhcp-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.119687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-04 09:36:21.119695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-l3-agent:2024.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-02-04 09:36:21.119703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-04 09:36:21.119714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-04 09:36:21.119740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-sriov-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.119749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.119757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-04 09:36:21.119765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-mlnx-agent:2024.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-04 09:36:21.119788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-04 09:36:21.119800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.119821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-eswitchd:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-04 09:36:21.119829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-04 09:36:21.119842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-04 09:36:21.119850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.119857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.119869 | orchestrator | skipping: [testbed-node-1] 2025-02-04 
09:36:21.119881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.119899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metadata-agent:2024.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-02-04 09:36:21.119907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-04 09:36:21.119915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-bgp-dragent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.119922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-04 09:36:21.119929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-infoblox-ipam-agent:2024.1', 'privileged': True, 
'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-02-04 09:36:21.119941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.119953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-metering-agent:2024.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-02-04 09:36:21.119973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-04 09:36:21.119981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/ironic-neutron-agent:2024.1', 'privileged': False, 'enabled': True, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.119996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-04 09:36:21.120004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-tls-proxy:2024.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-02-04 09:36:21.120015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.120034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/neutron-ovn-agent:2024.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-02-04 09:36:21.120041 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.120048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'nexus.testbed.osism.xyz:8193/kolla/neutron-ovn-vpn-agent:2024.1', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-02-04 
09:36:21.120055 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:36:21.120062 | orchestrator |
2025-02-04 09:36:21.120068 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2025-02-04 09:36:21.120075 | orchestrator | Tuesday 04 February 2025 09:34:46 +0000 (0:00:02.120) 0:05:10.544 ******
2025-02-04 09:36:21.120081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-02-04 09:36:21.120088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-02-04 09:36:21.120095 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:36:21.120102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-02-04 09:36:21.120108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-02-04 09:36:21.120114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-02-04 09:36:21.120125 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:36:21.120134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-02-04 09:36:21.120141 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:36:21.120147 | orchestrator |
2025-02-04 09:36:21.120154 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2025-02-04 09:36:21.120163 | orchestrator | Tuesday 04 February 2025 09:34:48 +0000 (0:00:02.050) 0:05:12.594 ******
2025-02-04 09:36:21.120169 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:36:21.120175 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:36:21.120182 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:36:21.120188 | orchestrator |
2025-02-04 09:36:21.120194 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2025-02-04 09:36:21.120201 | orchestrator | Tuesday 04 February 2025 09:34:48 +0000 (0:00:00.576) 0:05:13.171 ******
2025-02-04 09:36:21.120207 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:36:21.120213 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:36:21.120219 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:36:21.120226 | orchestrator |
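
The long run of "skipping" results above is expected on this OVN-based testbed: of all the neutron services kolla-ansible knows about, only neutron-server, neutron-ovn-metadata-agent and ironic-neutron-agent are enabled here, and the haproxy-config role only renders load-balancer configuration for services that are both enabled and expose a 'haproxy' mapping. A minimal Python sketch of that selection, assuming a services dict shaped like the (key, value) items in the log (illustrative only, not the role's actual Jinja2 condition):

    def haproxy_candidates(services):
        # Yield (name, haproxy_map) pairs the role would render config for.
        # A service is skipped when it is disabled (note the log mixes real
        # booleans with 'no' strings) or defines no 'haproxy' mapping at all.
        for name, svc in services.items():
            if str(svc.get('enabled', False)).strip().lower() in ('false', 'no', '0'):
                continue  # e.g. neutron-dhcp-agent, neutron-l3-agent above
            if not svc.get('haproxy'):
                continue  # e.g. neutron-ovn-metadata-agent: enabled, but no VIP to configure
            yield name, svc['haproxy']

    services = {
        'neutron-server': {'enabled': True,
                           'haproxy': {'neutron_server': {'port': '9696'}}},
        'neutron-dhcp-agent': {'enabled': False},
        'neutron-ovn-metadata-agent': {'enabled': True},
    }
    assert list(haproxy_candidates(services)) == [
        ('neutron-server', {'neutron_server': {'port': '9696'}})]
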
2025-02-04 09:36:21.120232 | orchestrator | TASK [include_role : placement] ************************************************
2025-02-04 09:36:21.120238 | orchestrator | Tuesday 04 February 2025 09:34:50 +0000 (0:00:01.808) 0:05:14.979 ******
2025-02-04 09:36:21.120245 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2025-02-04 09:36:21.120251 | orchestrator |
2025-02-04 09:36:21.120258 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2025-02-04 09:36:21.120264 | orchestrator | Tuesday 04 February 2025 09:34:52 +0000 (0:00:01.916) 0:05:16.896 ******
2025-02-04 09:36:21.120287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-02-04 09:36:21.120296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-02-04 09:36:21.120302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-02-04 09:36:21.120313 | orchestrator |
2025-02-04 09:36:21.120320 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2025-02-04 09:36:21.120326 | orchestrator | Tuesday 04 February 2025 09:34:57 +0000 (0:00:04.592) 0:05:21.489 ******
2025-02-04 09:36:21.120332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes':
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-02-04 09:36:21.120339 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.120352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-02-04 09:36:21.120359 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.120378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/placement-api:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-02-04 09:36:21.120385 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.120392 | orchestrator | 2025-02-04 09:36:21.120398 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-02-04 09:36:21.120404 | orchestrator | Tuesday 04 February 2025 09:34:58 +0000 (0:00:00.884) 0:05:22.374 ****** 2025-02-04 09:36:21.120414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-02-04 09:36:21.120421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}})
2025-02-04 09:36:21.120428 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:36:21.120435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-02-04 09:36:21.120441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-02-04 09:36:21.120448 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:36:21.120454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-02-04 09:36:21.120461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-02-04 09:36:21.120467 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:36:21.120474 | orchestrator |
2025-02-04 09:36:21.120480 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2025-02-04 09:36:21.120486 | orchestrator | Tuesday 04 February 2025 09:34:59 +0000 (0:00:01.410) 0:05:23.784 ******
2025-02-04 09:36:21.120492 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:36:21.120499 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:36:21.120508 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:36:21.120514 | orchestrator |
2025-02-04 09:36:21.120521 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2025-02-04 09:36:21.120527 | orchestrator | Tuesday 04 February 2025 09:34:59 +0000 (0:00:00.391) 0:05:24.175 ******
2025-02-04 09:36:21.120533 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:36:21.120540 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:36:21.120546 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:36:21.120553 | orchestrator |
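
Every service item above carries the same healthcheck shape: 'interval', 'retries', 'start_period' and 'timeout' as strings of seconds, plus a ['CMD-SHELL', '<command>'] test, which kolla hands to the container engine's health check. A rough sketch of the equivalent `docker run` health flags; the bare-seconds interpretation matches the '30'/'5' values shown, but treat the conversion as illustrative:

    def healthcheck_flags(hc):
        # Translate a kolla-style healthcheck dict into `docker run` health flags.
        kind, cmd = hc['test'][0], ' '.join(hc['test'][1:])
        assert kind == 'CMD-SHELL'  # every item in this log uses CMD-SHELL
        return ['--health-cmd=' + cmd,
                '--health-interval=' + hc['interval'] + 's',
                '--health-retries=' + hc['retries'],
                '--health-start-period=' + hc['start_period'] + 's',
                '--health-timeout=' + hc['timeout'] + 's']

    hc = {'interval': '30', 'retries': '3', 'start_period': '5',
          'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'],
          'timeout': '30'}
    print(' '.join(healthcheck_flags(hc)))
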
2025-02-04 09:36:21.120560 | orchestrator | TASK [include_role : nova] *****************************************************
2025-02-04 09:36:21.120566 | orchestrator | Tuesday 04 February 2025 09:35:01 +0000 (0:00:01.880) 0:05:26.055 ******
2025-02-04 09:36:21.120573 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-02-04 09:36:21.120579 | orchestrator |
2025-02-04 09:36:21.120585 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2025-02-04 09:36:21.120592 | orchestrator | Tuesday 04 February 2025 09:35:03 +0000 (0:00:02.069) 0:05:28.125 ******
2025-02-04 09:36:21.120610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-02-04 09:36:21.120627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-02-04 09:36:21.120634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-02-04 09:36:21.120641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-02-04 09:36:21.120659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-02-04 09:36:21.120675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.120682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.120689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.120696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.120702 | orchestrator | 2025-02-04 09:36:21.120709 | 
orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-02-04 09:36:21.120715 | orchestrator | Tuesday 04 February 2025 09:35:10 +0000 (0:00:06.553) 0:05:34.679 ****** 2025-02-04 09:36:21.120722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-02-04 09:36:21.120747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.120755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.120762 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.120769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-02-04 09:36:21.120811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.120818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.120825 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.120851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-api:2024.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-02-04 09:36:21.120864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-scheduler:2024.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.120871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'nexus.testbed.osism.xyz:8193/kolla/nova-super-conductor:2024.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.120877 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.120884 | orchestrator | 2025-02-04 09:36:21.120891 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-02-04 09:36:21.120897 | orchestrator | Tuesday 04 February 2025 09:35:11 +0000 (0:00:01.569) 0:05:36.248 ****** 2025-02-04 09:36:21.120904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-02-04 09:36:21.120910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-02-04 09:36:21.120917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-02-04 09:36:21.120923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-02-04 09:36:21.120930 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.120936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-02-04 09:36:21.120943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-02-04 09:36:21.120953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-02-04 09:36:21.120960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-02-04 09:36:21.120967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 
'no'}})
2025-02-04 09:36:21.120973 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:36:21.120992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-02-04 09:36:21.120999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-02-04 09:36:21.121006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-02-04 09:36:21.121012 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:36:21.121018 | orchestrator |
2025-02-04 09:36:21.121024 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2025-02-04 09:36:21.121030 | orchestrator | Tuesday 04 February 2025 09:35:13 +0000 (0:00:01.271) 0:05:37.520 ******
2025-02-04 09:36:21.121036 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:36:21.121042 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:36:21.121048 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:36:21.121054 | orchestrator |
2025-02-04 09:36:21.121060 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2025-02-04 09:36:21.121066 | orchestrator | Tuesday 04 February 2025 09:35:13 +0000 (0:00:00.626) 0:05:38.146 ******
2025-02-04 09:36:21.121072 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:36:21.121078 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:36:21.121085 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:36:21.121091 | orchestrator |
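
Note the mixed value types in the nova items above: most 'enabled' flags are real booleans, but nova_metadata_external and nova-super-conductor carry the string 'no'. Kolla's templates normalize both forms through Ansible's bool filter before deciding what to render or skip; a rough Python stand-in for that normalization (illustrative, not Ansible's exact implementation):

    def to_bool(value):
        # Collapse the True / False / 'yes' / 'no' mix seen in the service
        # definitions into a plain boolean, roughly like Ansible's | bool.
        if isinstance(value, bool):
            return value
        return str(value).strip().lower() in ('1', 'true', 'yes', 'on')

    assert to_bool(True) and to_bool('yes')
    assert not to_bool(False) and not to_bool('no')
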
2025-02-04 09:36:21.121097 | orchestrator | TASK [include_role : nova-cell] ************************************************
2025-02-04 09:36:21.121103 | orchestrator | Tuesday 04 February 2025 09:35:15 +0000 (0:00:01.926) 0:05:40.073 ******
2025-02-04 09:36:21.121109 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-02-04 09:36:21.121115 | orchestrator |
2025-02-04 09:36:21.121121 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2025-02-04 09:36:21.121127 | orchestrator | Tuesday 04 February 2025 09:35:17 +0000 (0:00:02.292) 0:05:42.365 ******
2025-02-04 09:36:21.121133 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2025-02-04 09:36:21.121139 | orchestrator |
2025-02-04 09:36:21.121148 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2025-02-04 09:36:21.121155 | orchestrator | Tuesday 04 February 2025 09:35:19 +0000 (0:00:01.503) 0:05:43.869 ******
2025-02-04 09:36:21.121161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-02-04 09:36:21.121172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-02-04 09:36:21.121178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-02-04 09:36:21.121185 | orchestrator |
2025-02-04 09:36:21.121191 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2025-02-04 09:36:21.121197 | orchestrator | Tuesday 04 February 2025 09:35:25 +0000 (0:00:06.466) 0:05:50.336 ******
2025-02-04 09:36:21.121215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-02-04 09:36:21.121222 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:36:21.121233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-02-04 09:36:21.121239 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:36:21.121246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-02-04 09:36:21.121252 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:36:21.121258 | orchestrator |
2025-02-04 09:36:21.121264 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2025-02-04 09:36:21.121270 | orchestrator | Tuesday 04
2025-02-04 09:36:21.121258 | orchestrator |
2025-02-04 09:36:21.121264 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2025-02-04 09:36:21.121270 | orchestrator | Tuesday 04 February 2025 09:35:28 +0000 (0:00:02.174) 0:05:52.510 ******
2025-02-04 09:36:21.121276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-02-04 09:36:21.121283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-02-04 09:36:21.121294 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:36:21.121301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-02-04 09:36:21.121309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-02-04 09:36:21.121315 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:36:21.121322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-02-04 09:36:21.121328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-02-04 09:36:21.121334 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:36:21.121340 | orchestrator |
2025-02-04 09:36:21.121346 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-02-04 09:36:21.121352 | orchestrator | Tuesday 04 February 2025 09:35:30 +0000 (0:00:02.199) 0:05:54.710 ******
2025-02-04 09:36:21.121358 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:36:21.121364 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:36:21.121370 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:36:21.121376 | orchestrator |
2025-02-04 09:36:21.121382 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-02-04 09:36:21.121388 | orchestrator | Tuesday 04 February 2025 09:35:30 +0000 (0:00:00.485) 0:05:55.195 ******
2025-02-04 09:36:21.121394 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:36:21.121400 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:36:21.121406 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:36:21.121412 | orchestrator |
2025-02-04 09:36:21.121418 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2025-02-04 09:36:21.121424 | orchestrator | Tuesday 04 February 2025 09:35:31 +0000 (0:00:01.083) 0:05:56.278 ******
2025-02-04 09:36:21.121430 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2025-02-04 09:36:21.121436 | orchestrator |
2025-02-04 09:36:21.121442 | orchestrator | TASK [haproxy-config :
Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-02-04 09:36:21.121448 | orchestrator | Tuesday 04 February 2025 09:35:33 +0000 (0:00:01.527) 0:05:57.806 ****** 2025-02-04 09:36:21.121468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-02-04 09:36:21.121474 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.121481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-02-04 09:36:21.121492 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.121498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-02-04 09:36:21.121504 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.121510 | orchestrator | 2025-02-04 09:36:21.121516 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-02-04 09:36:21.121522 | orchestrator | Tuesday 04 February 2025 09:35:35 +0000 (0:00:02.127) 0:05:59.933 ****** 2025-02-04 09:36:21.121528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-02-04 09:36:21.121534 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.121540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': 
'6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-02-04 09:36:21.121546 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.121552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-02-04 09:36:21.121558 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.121564 | orchestrator | 2025-02-04 09:36:21.121570 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-02-04 09:36:21.121576 | orchestrator | Tuesday 04 February 2025 09:35:37 +0000 (0:00:02.164) 0:06:02.098 ****** 2025-02-04 09:36:21.121582 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.121588 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.121594 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.121600 | orchestrator | 2025-02-04 09:36:21.121606 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-02-04 09:36:21.121612 | orchestrator | Tuesday 04 February 2025 09:35:40 +0000 (0:00:02.872) 0:06:04.970 ****** 2025-02-04 09:36:21.121618 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.121626 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.121632 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.121638 | orchestrator | 2025-02-04 09:36:21.121644 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-02-04 09:36:21.121650 | orchestrator | Tuesday 04 February 2025 09:35:41 +0000 (0:00:00.807) 0:06:05.778 ****** 2025-02-04 09:36:21.121660 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.121666 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.121672 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.121678 | orchestrator | 2025-02-04 09:36:21.121685 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-02-04 09:36:21.121690 | orchestrator | Tuesday 04 February 2025 09:35:42 +0000 (0:00:01.077) 0:06:06.855 ****** 2025-02-04 09:36:21.121696 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-02-04 09:36:21.121702 | orchestrator | 2025-02-04 09:36:21.121709 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-02-04 09:36:21.121715 | orchestrator | Tuesday 04 February 2025 09:35:43 +0000 (0:00:01.291) 0:06:08.147 ****** 2025-02-04 09:36:21.121725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 
'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-02-04 09:36:21.121731 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.121737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-02-04 09:36:21.121743 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.121750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-02-04 09:36:21.121756 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.121762 | orchestrator | 2025-02-04 09:36:21.121769 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-02-04 09:36:21.121786 | orchestrator | Tuesday 04 February 2025 09:35:45 +0000 (0:00:01.377) 0:06:09.525 ****** 2025-02-04 09:36:21.121792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-02-04 09:36:21.121798 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.121805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-02-04 09:36:21.121814 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.121824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 
10m']}}}})  2025-02-04 09:36:21.121831 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.121837 | orchestrator | 2025-02-04 09:36:21.121843 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-02-04 09:36:21.121849 | orchestrator | Tuesday 04 February 2025 09:35:47 +0000 (0:00:02.186) 0:06:11.712 ****** 2025-02-04 09:36:21.121855 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.121861 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.121867 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.121878 | orchestrator | 2025-02-04 09:36:21.121885 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-02-04 09:36:21.121891 | orchestrator | Tuesday 04 February 2025 09:35:50 +0000 (0:00:02.729) 0:06:14.442 ****** 2025-02-04 09:36:21.121897 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.121903 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.121909 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.121915 | orchestrator | 2025-02-04 09:36:21.121922 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-02-04 09:36:21.121928 | orchestrator | Tuesday 04 February 2025 09:35:50 +0000 (0:00:00.437) 0:06:14.880 ****** 2025-02-04 09:36:21.121934 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.121940 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.121946 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.121952 | orchestrator | 2025-02-04 09:36:21.121958 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-02-04 09:36:21.121967 | orchestrator | Tuesday 04 February 2025 09:35:51 +0000 (0:00:01.440) 0:06:16.320 ****** 2025-02-04 09:36:21.121973 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:36:21.121979 | orchestrator | 2025-02-04 09:36:21.121985 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-02-04 09:36:21.121991 | orchestrator | Tuesday 04 February 2025 09:35:53 +0000 (0:00:01.906) 0:06:18.227 ****** 2025-02-04 09:36:21.121997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-04 09:36:21.122008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-04 09:36:21.122041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-04 09:36:21.122053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-04 09:36:21.122060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.122066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-04 09:36:21.122072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-04 09:36:21.122083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-04 09:36:21.122093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-04 09:36:21.122100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.122109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-02-04 09:36:21.122116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}})  2025-02-04 09:36:21.122122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-04 09:36:21.122133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-04 09:36:21.122144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.122150 | orchestrator | 2025-02-04 09:36:21.122156 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-02-04 09:36:21.122163 | orchestrator | Tuesday 04 February 2025 09:35:57 +0000 (0:00:03.874) 0:06:22.102 ****** 2025-02-04 09:36:21.122171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-02-04 09:36:21.122183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-04 09:36:21.122190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-04 09:36:21.122196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-04 09:36:21.122202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.122213 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.122219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-02-04 09:36:21.122226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-04 09:36:21.122242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-04 09:36:21.122249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-04 09:36:21.122255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.122262 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.122268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-api:2024.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-02-04 09:36:21.122278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-driver-agent:2024.1', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-02-04 09:36:21.122288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-health-manager:2024.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-02-04 09:36:21.122298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-housekeeping:2024.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-02-04 09:36:21.122304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/octavia-worker:2024.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-02-04 09:36:21.122311 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.122317 | orchestrator | 2025-02-04 09:36:21.122323 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-02-04 09:36:21.122330 | orchestrator | Tuesday 04 February 2025 09:35:58 +0000 (0:00:01.032) 0:06:23.134 ****** 2025-02-04 09:36:21.122336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-02-04 09:36:21.122342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-02-04 09:36:21.122352 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.122358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-02-04 09:36:21.122364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-02-04 09:36:21.122370 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.122379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-02-04 09:36:21.122388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-02-04 09:36:21.122395 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.122401 | orchestrator | 2025-02-04 09:36:21.122407 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-02-04 09:36:21.122416 | orchestrator | Tuesday 04 February 2025 09:35:59 +0000 (0:00:01.171) 0:06:24.305 ****** 2025-02-04 09:36:21.122427 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.122437 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.122447 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.122457 | orchestrator | 2025-02-04 09:36:21.122467 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-02-04 09:36:21.122477 | orchestrator | Tuesday 04 February 2025 09:36:00 +0000 (0:00:00.434) 0:06:24.740 ****** 2025-02-04 09:36:21.122487 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.122497 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.122503 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.122510 | orchestrator | 2025-02-04 09:36:21.122516 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-02-04 09:36:21.122522 | orchestrator | Tuesday 04 February 2025 09:36:01 +0000 (0:00:01.335) 0:06:26.075 ****** 2025-02-04 09:36:21.122528 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:36:21.122534 | orchestrator | 2025-02-04 09:36:21.122539 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-02-04 09:36:21.122545 | orchestrator | Tuesday 04 February 2025 09:36:03 +0000 (0:00:01.795) 0:06:27.871 ****** 2025-02-04 09:36:21.122556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-04 09:36:21.122563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-04 09:36:21.122574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-04 09:36:21.122586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-04 09:36:21.122595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-04 09:36:21.122602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-04 09:36:21.122617 | orchestrator | 2025-02-04 09:36:21.122623 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-02-04 09:36:21.122629 | orchestrator | Tuesday 04 February 2025 09:36:11 +0000 (0:00:07.743) 0:06:35.614 ****** 2025-02-04 09:36:21.122635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-02-04 09:36:21.122642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-02-04 09:36:21.122648 | orchestrator 
| skipping: [testbed-node-0] 2025-02-04 09:36:21.122657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-02-04 09:36:21.122664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-02-04 09:36:21.122674 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.122680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-02-04 09:36:21.122691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-02-04 09:36:21.122698 | orchestrator | skipping: [testbed-node-2]
2025-02-04 09:36:21.122704 | orchestrator |
2025-02-04 09:36:21.122710 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ********************
2025-02-04 09:36:21.122717 | orchestrator | Tuesday 04 February 2025 09:36:12 +0000 (0:00:01.140) 0:06:36.755 ******
2025-02-04 09:36:21.122723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-02-04 09:36:21.122729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-02-04 09:36:21.122735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-02-04 09:36:21.122742 | orchestrator | skipping: [testbed-node-0]
2025-02-04 09:36:21.122750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-02-04 09:36:21.122757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-02-04 09:36:21.122767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-02-04 09:36:21.122785 | orchestrator | skipping: [testbed-node-1]
2025-02-04 09:36:21.122791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-02-04 09:36:21.122798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-02-04 09:36:21.122804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-02-04 09:36:21.122813 | orchestrator | skipping: [testbed-node-2]
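
Two HAProxy details in the opensearch items above are worth flagging: the dashboards frontends carry basic-auth settings (auth_user/auth_pass, here the testbed placeholder credentials opensearch/password), and the opensearch frontend adds "option dontlog-normal" so routine successful requests, such as health probes, stay out of the HAProxy log. A sketch of just those haproxy sub-dicts, with values as they appear in this run:

    # haproxy sub-dicts as shown in the loop items above (testbed values);
    # this is the shape the haproxy-config role renders into frontends/backends.
    opensearch:
      enabled: true
      mode: http
      external: false
      port: "9200"
      frontend_http_extra:
        - option dontlog-normal   # suppress logging of normal (successful) traffic
    opensearch_dashboards_external:
      enabled: true
      mode: http
      external: true
      external_fqdn: api.testbed.osism.xyz
      port: "5601"
      listen_port: "5601"
      auth_user: opensearch       # dashboards sit behind HAProxy basic auth
      auth_pass: password         # placeholder credential from this testbed run
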
[proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-02-04 09:36:21.122831 | orchestrator | Tuesday 04 February 2025 09:36:14 +0000 (0:00:01.635) 0:06:38.390 ****** 2025-02-04 09:36:21.122837 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.122843 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.122849 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.122855 | orchestrator | 2025-02-04 09:36:21.122861 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-02-04 09:36:21.122867 | orchestrator | Tuesday 04 February 2025 09:36:14 +0000 (0:00:00.333) 0:06:38.724 ****** 2025-02-04 09:36:21.122873 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:36:21.122879 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:36:21.122885 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:36:21.122891 | orchestrator | 2025-02-04 09:36:21.122897 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-02-04 09:36:21.122903 | orchestrator | Tuesday 04 February 2025 09:36:15 +0000 (0:00:01.606) 0:06:40.330 ****** 2025-02-04 09:36:21.122909 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:36:21.122915 | orchestrator | 2025-02-04 09:36:21.122921 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-02-04 09:36:21.122927 | orchestrator | Tuesday 04 February 2025 09:36:17 +0000 (0:00:01.966) 0:06:42.297 ****** 2025-02-04 09:36:21.122954 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"msg": "{{ prometheus_services }}: {'prometheus-server': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': '{{ enable_prometheus_server | bool }}', 'image': '{{ prometheus_server_image_full }}', 'volumes': '{{ prometheus_server_default_volumes + prometheus_server_extra_volumes }}', 'dimensions': '{{ prometheus_server_dimensions }}', 'haproxy': {'prometheus_server': {'enabled': '{{ enable_prometheus_server | bool }}', 'mode': 'http', 'external': False, 'port': '{{ prometheus_port }}', 'active_passive': '{{ prometheus_active_passive | bool }}'}, 'prometheus_server_external': {'enabled': '{{ enable_prometheus_server_external | bool }}', 'mode': 'http', 'external': True, 'external_fqdn': '{{ prometheus_external_fqdn }}', 'port': '{{ prometheus_public_port }}', 'listen_port': '{{ prometheus_listen_port }}', 'active_passive': '{{ prometheus_active_passive | bool }}'}}}, 'prometheus-node-exporter': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': '{{ enable_prometheus_node_exporter | bool }}', 'image': '{{ prometheus_node_exporter_image_full }}', 'pid_mode': 'host', 'volumes': '{{ prometheus_node_exporter_default_volumes + prometheus_node_exporter_extra_volumes }}', 'dimensions': '{{ prometheus_node_exporter_dimensions }}'}, 'prometheus-mysqld-exporter': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': '{{ enable_prometheus_mysqld_exporter | bool }}', 'image': '{{ prometheus_mysqld_exporter_image_full }}', 'volumes': '{{ prometheus_mysqld_exporter_default_volumes + prometheus_mysqld_exporter_extra_volumes }}', 'dimensions': '{{ prometheus_mysqld_exporter_dimensions }}'}, 'prometheus-memcached-exporter': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': '{{ 
enable_prometheus_memcached_exporter | bool }}', 'image': '{{ prometheus_memcached_exporter_image_full }}', 'volumes': '{{ prometheus_memcached_exporter_default_volumes + prometheus_memcached_exporter_extra_volumes }}', 'dimensions': '{{ prometheus_memcached_exporter_dimensions }}'}, 'prometheus-cadvisor': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': '{{ enable_prometheus_cadvisor | bool }}', 'image': '{{ prometheus_cadvisor_image_full }}', 'volumes': '{{ prometheus_cadvisor_default_volumes + prometheus_cadvisor_extra_volumes }}', 'dimensions': '{{ prometheus_cadvisor_dimensions }}'}, 'prometheus-alertmanager': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': '{{ enable_prometheus_alertmanager | bool }}', 'image': '{{ prometheus_alertmanager_image_full }}', 'volumes': '{{ prometheus_alertmanager_default_volumes + prometheus_alertmanager_extra_volumes }}', 'dimensions': '{{ prometheus_alertmanager_dimensions }}', 'haproxy': {'prometheus_alertmanager': {'enabled': '{{ enable_prometheus_alertmanager | bool }}', 'mode': 'http', 'external': False, 'port': '{{ prometheus_alertmanager_port }}', 'auth_user': '{{ prometheus_alertmanager_user }}', 'auth_pass': '{{ prometheus_alertmanager_password }}', 'active_passive': '{{ prometheus_alertmanager_active_passive | bool }}'}, 'prometheus_alertmanager_external': {'enabled': '{{ enable_prometheus_alertmanager_external | bool }}', 'mode': 'http', 'external': True, 'external_fqdn': '{{ prometheus_alertmanager_external_fqdn }}', 'port': '{{ prometheus_alertmanager_public_port }}', 'listen_port': '{{ prometheus_alertmanager_listen_port }}', 'auth_user': '{{ prometheus_alertmanager_user }}', 'auth_pass': '{{ prometheus_alertmanager_password }}', 'active_passive': '{{ prometheus_alertmanager_active_passive | bool }}'}}}, 'prometheus-openstack-exporter': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': '{{ enable_prometheus_openstack_exporter | bool }}', 'environment': {'OS_COMPUTE_API_VERSION': '{{ prometheus_openstack_exporter_compute_api_version }}'}, 'image': '{{ prometheus_openstack_exporter_image_full }}', 'volumes': '{{ prometheus_openstack_exporter_default_volumes + prometheus_openstack_exporter_extra_volumes }}', 'dimensions': '{{ prometheus_openstack_exporter_dimensions }}', 'haproxy': {'prometheus_openstack_exporter': {'enabled': '{{ enable_prometheus_openstack_exporter | bool }}', 'mode': 'http', 'external': False, 'port': '{{ prometheus_openstack_exporter_port }}', 'backend_http_extra': ['timeout server {{ prometheus_openstack_exporter_timeout }}']}, 'prometheus_openstack_exporter_external': {'enabled': '{{ enable_prometheus_openstack_exporter_external | bool }}', 'mode': 'http', 'external': True, 'port': '{{ prometheus_openstack_exporter_port }}', 'backend_http_extra': ['timeout server {{ prometheus_openstack_exporter_timeout }}']}}}, 'prometheus-elasticsearch-exporter': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': '{{ enable_prometheus_elasticsearch_exporter | bool }}', 'image': '{{ prometheus_elasticsearch_exporter_image_full }}', 'volumes': '{{ prometheus_elasticsearch_exporter_default_volumes + prometheus_elasticsearch_exporter_extra_volumes }}', 'dimensions': '{{ prometheus_elasticsearch_exporter_dimensions }}'}, 'prometheus-blackbox-exporter': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 
'enabled': '{{ enable_prometheus_blackbox_exporter | bool }}', 'image': '{{ prometheus_blackbox_exporter_image_full }}', 'volumes': '{{ prometheus_blackbox_exporter_default_volumes + prometheus_blackbox_exporter_extra_volumes }}', 'dimensions': '{{ prometheus_blackbox_exporter_dimensions }}'}, 'prometheus-libvirt-exporter': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': '{{ enable_prometheus_libvirt_exporter | bool }}', 'image': '{{ prometheus_libvirt_exporter_image_full }}', 'volumes': '{{ prometheus_libvirt_exporter_default_volumes + prometheus_libvirt_exporter_extra_volumes }}', 'dimensions': '{{ prometheus_libvirt_exporter_dimensions }}'}, 'prometheus-msteams': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': '{{ enable_prometheus_msteams | bool }}', 'environment': '{{ prometheus_msteams_container_proxy }}', 'image': '{{ prometheus_msteams_image_full }}', 'volumes': '{{ prometheus_msteams_default_volumes + prometheus_msteams_extra_volumes }}', 'dimensions': '{{ prometheus_msteams_dimensions }}'}}: 'enable_prometheus_msteams' is undefined"}
2025-02-04 09:36:21.122986 | orchestrator | fatal: [testbed-node-1]: FAILED! => (same error as testbed-node-0: templating '{{ prometheus_services }}' fails because 'enable_prometheus_msteams' is undefined; full message identical and omitted here)
2025-02-04 09:36:21.123012 | orchestrator | fatal: [testbed-node-2]: FAILED! => (same error as testbed-node-0; full message identical and omitted here)
2025-02-04 09:36:24.149008 | orchestrator |
2025-02-04 09:36:24.149128 | orchestrator | PLAY RECAP *********************************************************************
2025-02-04 09:36:24.149151 | orchestrator | testbed-node-0 : ok=71  changed=37  unreachable=0 failed=1  skipped=112  rescued=0 ignored=0
2025-02-04 09:36:24.149168 | orchestrator | testbed-node-1 : ok=70  changed=37  unreachable=0 failed=1  skipped=112  rescued=0 ignored=0
2025-02-04 09:36:24.149183 | orchestrator | testbed-node-2 : ok=70  changed=37  unreachable=0 failed=1  skipped=112  rescued=0 ignored=0
2025-02-04 09:36:24.149197 | orchestrator |
2025-02-04 09:36:24.149211 | orchestrator |
2025-02-04 09:36:24.149226 | orchestrator | TASKS RECAP ********************************************************************
2025-02-04 09:36:24.149240 | orchestrator | Tuesday 04 February 2025 09:36:19 +0000 (0:00:01.133) 0:06:43.431 ******
2025-02-04 09:36:24.149273 | orchestrator | ===============================================================================
2025-02-04 09:36:24.149289 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 9.81s
2025-02-04 09:36:24.149303 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 9.31s
2025-02-04 09:36:24.149318 | orchestrator | haproxy-config : Copying over ironic haproxy config --------------------- 9.15s
2025-02-04 09:36:24.149332 | orchestrator | haproxy-config : Copying over heat haproxy config ----------------------- 8.99s
2025-02-04 09:36:24.149346 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 7.74s
2025-02-04 09:36:24.149360 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 7.26s
2025-02-04 09:36:24.149396 | orchestrator | loadbalancer : Copying over custom haproxy services configuration ------- 7.17s
2025-02-04 09:36:24.149410 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 7.01s
2025-02-04 09:36:24.149424 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 6.70s
2025-02-04 09:36:24.149438 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 6.55s
2025-02-04 09:36:24.149452 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 6.47s
2025-02-04 09:36:24.149466 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 6.31s
2025-02-04 09:36:24.149479 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 6.26s
2025-02-04 09:36:24.149493 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 6.20s
2025-02-04 09:36:24.149507 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.81s
2025-02-04 09:36:24.149522 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 5.73s
2025-02-04 09:36:24.149538 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 5.40s
2025-02-04 09:36:24.149553 | orchestrator | loadbalancer : Copying over haproxy.pem --------------------------------- 5.38s
2025-02-04 09:36:24.149569 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 5.29s
2025-02-04 09:36:24.149585 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 4.93s
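The failure above is a Jinja2 templating error: the prometheus_services dictionary references enable_prometheus_msteams, but that variable has no definition anywhere in this environment, so rendering aborts on all three nodes and each host finishes the play with failed=1. Assuming the flag is simply missing from the environment's kolla defaults (which file is authoritative depends on the OSISM and kolla-ansible versions in play), one plausible workaround is to define it explicitly:

```yaml
# Hypothetical workaround sketch: define the missing flag in the kolla
# configuration layer (e.g. environments/kolla/configuration.yml in an
# OSISM testbed, or /etc/kolla/globals.yml in plain kolla-ansible).
# 'false' keeps the prometheus-msteams container disabled; the point is
# only that the variable must be defined before prometheus_services is
# templated.
enable_prometheus_msteams: false
```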
2025-02-04 09:36:24.149618 | orchestrator | 2025-02-04 09:36:24 | INFO  | Task d6a43cdf-3552-40d2-be53-ab531cfa2f71 is in state STARTED
2025-02-04 09:36:24.150957 | orchestrator | 2025-02-04 09:36:24 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED
2025-02-04 09:36:24.152993 | orchestrator | 2025-02-04 09:36:24 | INFO  | Task a2947d26-032a-46b2-b0b0-2244a30dbc60 is in state STARTED
2025-02-04 09:36:27.198208 | orchestrator | 2025-02-04 09:36:24 | INFO  | Wait 1 second(s) until the next check
[... identical polling records elided: the three tasks remain in state STARTED and the client logs "Wait 1 second(s) until the next check" roughly every three seconds from 09:36:27 through 09:40:27 ...]
2025-02-04 09:40:30.616687 | orchestrator | 2025-02-04 09:40:30 | INFO  | Task d6a43cdf-3552-40d2-be53-ab531cfa2f71 is in state STARTED
2025-02-04 09:40:30.617643 | orchestrator | 2025-02-04 09:40:30 | INFO  | Task
cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:40:30.620126 | orchestrator | 2025-02-04 09:40:30 | INFO  | Task a2947d26-032a-46b2-b0b0-2244a30dbc60 is in state STARTED 2025-02-04 09:40:30.620355 | orchestrator | 2025-02-04 09:40:30 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:40:33.658214 | orchestrator | 2025-02-04 09:40:33 | INFO  | Task d6a43cdf-3552-40d2-be53-ab531cfa2f71 is in state STARTED 2025-02-04 09:40:33.659396 | orchestrator | 2025-02-04 09:40:33 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:40:33.660446 | orchestrator | 2025-02-04 09:40:33 | INFO  | Task a2947d26-032a-46b2-b0b0-2244a30dbc60 is in state STARTED 2025-02-04 09:40:36.700765 | orchestrator | 2025-02-04 09:40:33 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:40:36.700912 | orchestrator | 2025-02-04 09:40:36 | INFO  | Task d6a43cdf-3552-40d2-be53-ab531cfa2f71 is in state STARTED 2025-02-04 09:40:36.701993 | orchestrator | 2025-02-04 09:40:36 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:40:39.730639 | orchestrator | 2025-02-04 09:40:36 | INFO  | Task a2947d26-032a-46b2-b0b0-2244a30dbc60 is in state STARTED 2025-02-04 09:40:39.730799 | orchestrator | 2025-02-04 09:40:36 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:40:39.730856 | orchestrator | 2025-02-04 09:40:39 | INFO  | Task d6a43cdf-3552-40d2-be53-ab531cfa2f71 is in state STARTED 2025-02-04 09:40:39.731657 | orchestrator | 2025-02-04 09:40:39 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:40:42.762905 | orchestrator | 2025-02-04 09:40:39 | INFO  | Task a2947d26-032a-46b2-b0b0-2244a30dbc60 is in state STARTED 2025-02-04 09:40:42.763036 | orchestrator | 2025-02-04 09:40:39 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:40:42.763075 | orchestrator | 2025-02-04 09:40:42 | INFO  | Task d6a43cdf-3552-40d2-be53-ab531cfa2f71 is in state STARTED 2025-02-04 09:40:42.763838 | orchestrator | 2025-02-04 09:40:42 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:40:42.765327 | orchestrator | 2025-02-04 09:40:42 | INFO  | Task a2947d26-032a-46b2-b0b0-2244a30dbc60 is in state STARTED 2025-02-04 09:40:45.800910 | orchestrator | 2025-02-04 09:40:42 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:40:45.801010 | orchestrator | 2025-02-04 09:40:45 | INFO  | Task d6a43cdf-3552-40d2-be53-ab531cfa2f71 is in state STARTED 2025-02-04 09:40:45.801143 | orchestrator | 2025-02-04 09:40:45 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:40:45.803045 | orchestrator | 2025-02-04 09:40:45 | INFO  | Task a2947d26-032a-46b2-b0b0-2244a30dbc60 is in state STARTED 2025-02-04 09:40:48.834550 | orchestrator | 2025-02-04 09:40:45 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:40:48.834698 | orchestrator | 2025-02-04 09:40:48 | INFO  | Task d6a43cdf-3552-40d2-be53-ab531cfa2f71 is in state STARTED 2025-02-04 09:40:48.836433 | orchestrator | 2025-02-04 09:40:48 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:40:48.837452 | orchestrator | 2025-02-04 09:40:48 | INFO  | Task a2947d26-032a-46b2-b0b0-2244a30dbc60 is in state STARTED 2025-02-04 09:40:48.837595 | orchestrator | 2025-02-04 09:40:48 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:40:51.877397 | orchestrator | 2025-02-04 09:40:51 | INFO  | Task d6a43cdf-3552-40d2-be53-ab531cfa2f71 is in state 
STARTED 2025-02-04 09:40:51.878434 | orchestrator | 2025-02-04 09:40:51 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:40:51.879728 | orchestrator | 2025-02-04 09:40:51 | INFO  | Task a2947d26-032a-46b2-b0b0-2244a30dbc60 is in state STARTED 2025-02-04 09:40:54.918419 | orchestrator | 2025-02-04 09:40:51 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:40:54.918598 | orchestrator | 2025-02-04 09:40:54 | INFO  | Task d6a43cdf-3552-40d2-be53-ab531cfa2f71 is in state STARTED 2025-02-04 09:40:54.919387 | orchestrator | 2025-02-04 09:40:54 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:40:54.919451 | orchestrator | 2025-02-04 09:40:54 | INFO  | Task a2947d26-032a-46b2-b0b0-2244a30dbc60 is in state STARTED 2025-02-04 09:40:57.959915 | orchestrator | 2025-02-04 09:40:54 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:40:57.960052 | orchestrator | 2025-02-04 09:40:57 | INFO  | Task d6a43cdf-3552-40d2-be53-ab531cfa2f71 is in state STARTED 2025-02-04 09:40:57.960134 | orchestrator | 2025-02-04 09:40:57 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:40:57.961760 | orchestrator | 2025-02-04 09:40:57 | INFO  | Task a2947d26-032a-46b2-b0b0-2244a30dbc60 is in state STARTED 2025-02-04 09:41:01.002380 | orchestrator | 2025-02-04 09:40:57 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:41:01.002529 | orchestrator | 2025-02-04 09:41:00 | INFO  | Task d6a43cdf-3552-40d2-be53-ab531cfa2f71 is in state STARTED 2025-02-04 09:41:01.003217 | orchestrator | 2025-02-04 09:41:00 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:41:01.003259 | orchestrator | 2025-02-04 09:41:00 | INFO  | Task a2947d26-032a-46b2-b0b0-2244a30dbc60 is in state STARTED 2025-02-04 09:41:04.051111 | orchestrator | 2025-02-04 09:41:00 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:41:04.051314 | orchestrator | 2025-02-04 09:41:04 | INFO  | Task d6a43cdf-3552-40d2-be53-ab531cfa2f71 is in state STARTED 2025-02-04 09:41:04.052162 | orchestrator | 2025-02-04 09:41:04 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:41:04.052261 | orchestrator | 2025-02-04 09:41:04 | INFO  | Task a2947d26-032a-46b2-b0b0-2244a30dbc60 is in state STARTED 2025-02-04 09:41:07.098707 | orchestrator | 2025-02-04 09:41:04 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:41:07.098870 | orchestrator | 2025-02-04 09:41:07 | INFO  | Task d6a43cdf-3552-40d2-be53-ab531cfa2f71 is in state STARTED 2025-02-04 09:41:07.100564 | orchestrator | 2025-02-04 09:41:07 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:41:07.101628 | orchestrator | 2025-02-04 09:41:07 | INFO  | Task a2947d26-032a-46b2-b0b0-2244a30dbc60 is in state STARTED 2025-02-04 09:41:10.139334 | orchestrator | 2025-02-04 09:41:07 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:41:10.139457 | orchestrator | 2025-02-04 09:41:10 | INFO  | Task d6a43cdf-3552-40d2-be53-ab531cfa2f71 is in state STARTED 2025-02-04 09:41:10.139547 | orchestrator | 2025-02-04 09:41:10 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:41:10.140740 | orchestrator | 2025-02-04 09:41:10 | INFO  | Task a2947d26-032a-46b2-b0b0-2244a30dbc60 is in state STARTED 2025-02-04 09:41:13.185301 | orchestrator | 2025-02-04 09:41:10 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:41:13.185430 | orchestrator 
| 2025-02-04 09:41:13 | INFO  | Task d6a43cdf-3552-40d2-be53-ab531cfa2f71 is in state STARTED 2025-02-04 09:41:13.186064 | orchestrator | 2025-02-04 09:41:13 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:41:13.187317 | orchestrator | 2025-02-04 09:41:13 | INFO  | Task a2947d26-032a-46b2-b0b0-2244a30dbc60 is in state STARTED 2025-02-04 09:41:16.219352 | orchestrator | 2025-02-04 09:41:13 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:41:16.219574 | orchestrator | 2025-02-04 09:41:16 | INFO  | Task d6a43cdf-3552-40d2-be53-ab531cfa2f71 is in state STARTED 2025-02-04 09:41:16.219671 | orchestrator | 2025-02-04 09:41:16 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:41:16.220596 | orchestrator | 2025-02-04 09:41:16 | INFO  | Task a2947d26-032a-46b2-b0b0-2244a30dbc60 is in state STARTED 2025-02-04 09:41:19.258140 | orchestrator | 2025-02-04 09:41:16 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:41:19.258312 | orchestrator | 2025-02-04 09:41:19 | INFO  | Task d6a43cdf-3552-40d2-be53-ab531cfa2f71 is in state STARTED 2025-02-04 09:41:19.258791 | orchestrator | 2025-02-04 09:41:19 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:41:19.259882 | orchestrator | 2025-02-04 09:41:19 | INFO  | Task a2947d26-032a-46b2-b0b0-2244a30dbc60 is in state STARTED 2025-02-04 09:41:22.317111 | orchestrator | 2025-02-04 09:41:19 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:41:22.317314 | orchestrator | 2025-02-04 09:41:22 | INFO  | Task dceb67c2-663d-4aca-9ad7-8c4c843bf5c5 is in state STARTED 2025-02-04 09:41:22.321683 | orchestrator | 2025-02-04 09:41:22.321779 | orchestrator | 2025-02-04 09:41:22.321800 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-02-04 09:41:22.321816 | orchestrator | 2025-02-04 09:41:22.321831 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-02-04 09:41:22.321848 | orchestrator | Tuesday 04 February 2025 09:36:23 +0000 (0:00:00.097) 0:00:00.097 ****** 2025-02-04 09:41:22.321872 | orchestrator | ok: [localhost] => { 2025-02-04 09:41:22.321891 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-02-04 09:41:22.321906 | orchestrator | } 2025-02-04 09:41:22.321920 | orchestrator | 2025-02-04 09:41:22.321935 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-02-04 09:41:22.321949 | orchestrator | Tuesday 04 February 2025 09:36:23 +0000 (0:00:00.043) 0:00:00.141 ****** 2025-02-04 09:41:22.321964 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-02-04 09:41:22.321979 | orchestrator | ...ignoring 2025-02-04 09:41:22.321993 | orchestrator | 2025-02-04 09:41:22.322008 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-02-04 09:41:22.322056 | orchestrator | Tuesday 04 February 2025 09:36:25 +0000 (0:00:02.576) 0:00:02.717 ****** 2025-02-04 09:41:22.322073 | orchestrator | skipping: [localhost] 2025-02-04 09:41:22.322088 | orchestrator | 2025-02-04 09:41:22.322102 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-02-04 09:41:22.322116 | orchestrator | Tuesday 04 February 2025 09:36:25 +0000 (0:00:00.062) 0:00:02.779 ****** 2025-02-04 09:41:22.322130 | orchestrator | ok: [localhost] 2025-02-04 09:41:22.322144 | orchestrator | 2025-02-04 09:41:22.322181 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-04 09:41:22.322196 | orchestrator | 2025-02-04 09:41:22.322227 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-04 09:41:22.322242 | orchestrator | Tuesday 04 February 2025 09:36:26 +0000 (0:00:00.161) 0:00:02.941 ****** 2025-02-04 09:41:22.322258 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:22.322273 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:22.322289 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:22.322305 | orchestrator | 2025-02-04 09:41:22.322322 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-04 09:41:22.322339 | orchestrator | Tuesday 04 February 2025 09:36:26 +0000 (0:00:00.469) 0:00:03.411 ****** 2025-02-04 09:41:22.322355 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-02-04 09:41:22.322371 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-02-04 09:41:22.322386 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-02-04 09:41:22.322402 | orchestrator | 2025-02-04 09:41:22.322417 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-02-04 09:41:22.322433 | orchestrator | 2025-02-04 09:41:22.322449 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-02-04 09:41:22.322465 | orchestrator | Tuesday 04 February 2025 09:36:27 +0000 (0:00:00.623) 0:00:04.035 ****** 2025-02-04 09:41:22.322482 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-02-04 09:41:22.322498 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-02-04 09:41:22.322513 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-02-04 09:41:22.322529 | orchestrator | 2025-02-04 09:41:22.322545 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-02-04 09:41:22.322561 | orchestrator | Tuesday 04 February 2025 09:36:27 +0000 (0:00:00.469) 0:00:04.504 ****** 2025-02-04 09:41:22.322597 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:41:22.322616 | orchestrator | 2025-02-04 09:41:22.322631 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-02-04 09:41:22.322646 | orchestrator | Tuesday 04 February 2025 09:36:28 +0000 (0:00:00.905) 0:00:05.410 ****** 2025-02-04 
09:41:22.322682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-04 09:41:22.322701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-04 09:41:22.322726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-04 09:41:22.322751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-02-04 09:41:22.322768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-02-04 09:41:22.322783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-02-04 09:41:22.322798 | orchestrator | 2025-02-04 09:41:22.322813 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-02-04 09:41:22.322827 | orchestrator | Tuesday 04 February 2025 09:36:32 +0000 (0:00:04.334) 0:00:09.744 ****** 2025-02-04 09:41:22.322842 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:22.322857 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:22.322877 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:41:22.322891 | orchestrator | 2025-02-04 09:41:22.322906 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-02-04 09:41:22.322931 | orchestrator | Tuesday 04 February 2025 09:36:33 +0000 (0:00:00.541) 0:00:10.285 ****** 2025-02-04 09:41:22.322946 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:22.322960 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:22.322974 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:41:22.322988 | orchestrator | 2025-02-04 09:41:22.323002 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-02-04 09:41:22.323016 | orchestrator | Tuesday 04 February 2025 09:36:35 +0000 (0:00:01.687) 0:00:11.972 ****** 2025-02-04 09:41:22.323039 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-04 09:41:22.323055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-04 09:41:22.323079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-02-04 09:41:22.323102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 
5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-04 09:41:22.323119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-02-04 09:41:22.323134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-02-04 09:41:22.323148 | orchestrator | 2025-02-04 09:41:22.323180 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-02-04 09:41:22.323202 | orchestrator | Tuesday 04 February 2025 09:36:40 +0000 (0:00:05.745) 0:00:17.718 ****** 2025-02-04 09:41:22.323216 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:22.323230 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:22.323245 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:41:22.323259 | orchestrator | 2025-02-04 09:41:22.323273 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-02-04 09:41:22.323287 | orchestrator | Tuesday 04 February 2025 09:36:41 +0000 (0:00:01.001) 0:00:18.719 ****** 2025-02-04 09:41:22.323309 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:41:22.323333 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:41:22.323356 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:41:22.323379 | orchestrator | 2025-02-04 09:41:22.323401 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-02-04 09:41:22.323424 | orchestrator | Tuesday 04 February 2025 09:36:49 +0000 (0:00:07.773) 0:00:26.492 ****** 2025-02-04 09:41:22.323447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-04 09:41:22.323489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-04 09:41:22.323528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-server:2024.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-02-04 09:41:22.323555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-02-04 09:41:22.323579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-02-04 09:41:22.323595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/mariadb-clustercheck:2024.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'haproxy', 'MYSQL_PASSWORD': '', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-02-04 09:41:22.323617 | orchestrator | 2025-02-04 09:41:22.323631 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-02-04 09:41:22.323646 | orchestrator | Tuesday 04 February 
2025 09:36:55 +0000 (0:00:05.837) 0:00:32.329 ****** 2025-02-04 09:41:22.323660 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:41:22.323674 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:41:22.323688 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:41:22.323710 | orchestrator | 2025-02-04 09:41:22.323724 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-02-04 09:41:22.323739 | orchestrator | Tuesday 04 February 2025 09:36:56 +0000 (0:00:01.073) 0:00:33.403 ****** 2025-02-04 09:41:22.323753 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:22.323768 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:22.323782 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:22.323796 | orchestrator | 2025-02-04 09:41:22.323810 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-02-04 09:41:22.323824 | orchestrator | Tuesday 04 February 2025 09:36:57 +0000 (0:00:00.506) 0:00:33.910 ****** 2025-02-04 09:41:22.323838 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:22.323852 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:22.323867 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:22.323881 | orchestrator | 2025-02-04 09:41:22.323895 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-02-04 09:41:22.323909 | orchestrator | Tuesday 04 February 2025 09:36:57 +0000 (0:00:00.507) 0:00:34.417 ****** 2025-02-04 09:41:22.323923 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-02-04 09:41:22.323937 | orchestrator | ...ignoring 2025-02-04 09:41:22.323952 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-02-04 09:41:22.323966 | orchestrator | ...ignoring 2025-02-04 09:41:22.323980 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-02-04 09:41:22.323994 | orchestrator | ...ignoring 2025-02-04 09:41:22.324008 | orchestrator | 2025-02-04 09:41:22.324023 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-02-04 09:41:22.324041 | orchestrator | Tuesday 04 February 2025 09:37:08 +0000 (0:00:11.234) 0:00:45.651 ****** 2025-02-04 09:41:22.324056 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:22.324070 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:22.324084 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:22.324098 | orchestrator | 2025-02-04 09:41:22.324112 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-02-04 09:41:22.324126 | orchestrator | Tuesday 04 February 2025 09:37:09 +0000 (0:00:00.506) 0:00:46.158 ****** 2025-02-04 09:41:22.324140 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:22.324154 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:22.324240 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:22.324255 | orchestrator | 2025-02-04 09:41:22.324267 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-02-04 09:41:22.324280 | orchestrator | Tuesday 04 February 2025 09:37:09 +0000 (0:00:00.499) 0:00:46.658 ****** 2025-02-04 09:41:22.324292 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:22.324305 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:22.324317 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:22.324329 | orchestrator | 2025-02-04 09:41:22.324342 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-02-04 09:41:22.324354 | orchestrator | Tuesday 04 February 2025 09:37:10 +0000 (0:00:00.476) 0:00:47.135 ****** 2025-02-04 09:41:22.324367 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:22.324386 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:22.324398 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:22.324411 | orchestrator | 2025-02-04 09:41:22.324424 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-02-04 09:41:22.324436 | orchestrator | Tuesday 04 February 2025 09:37:10 +0000 (0:00:00.399) 0:00:47.534 ****** 2025-02-04 09:41:22.324448 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:22.324461 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:22.324474 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:22.324486 | orchestrator | 2025-02-04 09:41:22.324499 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-02-04 09:41:22.324521 | orchestrator | Tuesday 04 February 2025 09:37:11 +0000 (0:00:00.571) 0:00:48.105 ****** 2025-02-04 09:41:22.324542 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:22.324571 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:22.324592 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:22.324612 | orchestrator | 2025-02-04 09:41:22.324633 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-02-04 09:41:22.324653 | orchestrator | Tuesday 04 February 2025 09:37:11 +0000 (0:00:00.521) 0:00:48.627 ****** 2025-02-04 09:41:22.324673 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:22.324695 | orchestrator | skipping: 
[testbed-node-2] 2025-02-04 09:41:22.324716 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-02-04 09:41:22.324737 | orchestrator | 2025-02-04 09:41:22.324759 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-02-04 09:41:22.324782 | orchestrator | Tuesday 04 February 2025 09:37:12 +0000 (0:00:00.447) 0:00:49.075 ****** 2025-02-04 09:41:22.324803 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:41:22.324825 | orchestrator | 2025-02-04 09:41:22.324839 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-02-04 09:41:22.324851 | orchestrator | Tuesday 04 February 2025 09:37:24 +0000 (0:00:12.036) 0:01:01.111 ****** 2025-02-04 09:41:22.324864 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:22.324876 | orchestrator | 2025-02-04 09:41:22.324889 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-02-04 09:41:22.324901 | orchestrator | Tuesday 04 February 2025 09:37:24 +0000 (0:00:00.109) 0:01:01.221 ****** 2025-02-04 09:41:22.324914 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:22.324926 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:22.324939 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:22.324952 | orchestrator | 2025-02-04 09:41:22.324964 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-02-04 09:41:22.324976 | orchestrator | Tuesday 04 February 2025 09:37:25 +0000 (0:00:01.105) 0:01:02.326 ****** 2025-02-04 09:41:22.324989 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:41:22.325001 | orchestrator | 2025-02-04 09:41:22.325014 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-02-04 09:41:22.325026 | orchestrator | Tuesday 04 February 2025 09:37:36 +0000 (0:00:10.561) 0:01:12.887 ****** 2025-02-04 09:41:22.325039 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for first MariaDB service port liveness (10 retries left). 
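
Side note on the probe retried above: the "Timeout when waiting for search string MariaDB in <ip>:3306" failures seen earlier in this play are the characteristic error message of ansible.builtin.wait_for with a search_regex, which waits until data matching the pattern (here the MySQL protocol greeting) is readable on the port. A minimal sketch of such a probe — illustrative only, not the actual kolla-ansible source; the address, timeout, and retry values are assumptions:

    # Probe the MariaDB port until the server greeting contains "MariaDB".
    # On timeout this yields the "Timeout when waiting for search string
    # MariaDB in <host>:3306" message seen in this play.
    - name: Wait for MariaDB service port liveness (sketch)
      ansible.builtin.wait_for:
        host: 192.168.16.10     # assumed: the node's internal API address
        port: 3306
        search_regex: MariaDB   # match the protocol banner, not just an open port
        timeout: 10
      register: mariadb_port
      until: mariadb_port is success
      retries: 10               # mirrors the "(10 retries left)" record above
      delay: 6                  # assumed pause between retries

Matching the greeting rather than merely the open port avoids counting a socket that accepts connections but is not yet serving the MySQL handshake.
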
2025-02-04 09:41:22.325052 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:22.325064 | orchestrator | 2025-02-04 09:41:22.325077 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-02-04 09:41:22.325090 | orchestrator | Tuesday 04 February 2025 09:37:43 +0000 (0:00:07.317) 0:01:20.204 ****** 2025-02-04 09:41:22.325102 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:22.325115 | orchestrator | 2025-02-04 09:41:22.325127 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-02-04 09:41:22.325140 | orchestrator | Tuesday 04 February 2025 09:37:46 +0000 (0:00:03.537) 0:01:23.742 ****** 2025-02-04 09:41:22.325152 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:41:22.325185 | orchestrator | 2025-02-04 09:41:22.325198 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-02-04 09:41:22.325219 | orchestrator | Tuesday 04 February 2025 09:37:46 +0000 (0:00:00.104) 0:01:23.846 ****** 2025-02-04 09:41:22.325232 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:22.325245 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:22.325257 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:22.325270 | orchestrator | 2025-02-04 09:41:22.325282 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-02-04 09:41:22.325295 | orchestrator | Tuesday 04 February 2025 09:37:47 +0000 (0:00:00.422) 0:01:24.269 ****** 2025-02-04 09:41:22.325307 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:22.325320 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:41:22.325333 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:41:22.325345 | orchestrator | 2025-02-04 09:41:22.325364 | orchestrator | RUNNING HANDLER [mariadb : Restart mariadb-clustercheck container] ************* 2025-02-04 09:41:22.325378 | orchestrator | Tuesday 04 February 2025 09:37:47 +0000 (0:00:00.411) 0:01:24.681 ****** 2025-02-04 09:41:22.325390 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-02-04 09:41:22.325403 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:41:22.325415 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:41:22.325428 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:41:22.325441 | orchestrator | 2025-02-04 09:41:22.325453 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-02-04 09:41:22.325465 | orchestrator | skipping: no hosts matched 2025-02-04 09:41:22.325478 | orchestrator | 2025-02-04 09:41:22.325491 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-02-04 09:41:22.325503 | orchestrator | 2025-02-04 09:41:22.325516 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-02-04 09:41:22.325529 | orchestrator | Tuesday 04 February 2025 09:38:04 +0000 (0:00:16.564) 0:01:41.245 ****** 2025-02-04 09:41:22.325541 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:41:22.325554 | orchestrator | 2025-02-04 09:41:22.325566 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-02-04 09:41:22.325579 | orchestrator | Tuesday 04 February 2025 09:38:23 +0000 (0:00:19.360) 0:02:00.605 ****** 2025-02-04 09:41:22.325591 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:22.325604 | orchestrator | 
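
The "sync WSREP" waits that follow check Galera's replication state rather than the TCP port: a node is only safe to use once wsrep_local_state_comment reports Synced. A sketch of such a check, assuming the mysql client inside the mariadb container named in this log (the credentials variable and retry budget are placeholders, not the actual kolla-ansible source):

    # Poll Galera's WSREP state until this node reports "Synced".
    - name: Wait for MariaDB service to sync WSREP (sketch)
      ansible.builtin.command: >
        docker exec mariadb mysql -u root -p{{ database_password }}
        --silent --skip-column-names
        -e "SHOW STATUS LIKE 'wsrep_local_state_comment'"
      register: wsrep_state
      until: "'Synced' in wsrep_state.stdout"
      retries: 10           # placeholder retry budget
      delay: 6              # placeholder delay in seconds
      changed_when: false   # a status query changes nothing
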
2025-02-04 09:41:22.325616 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-02-04 09:41:22.325629 | orchestrator | Tuesday 04 February 2025 09:38:38 +0000 (0:00:14.570) 0:02:15.176 ****** 2025-02-04 09:41:22.325641 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:22.325654 | orchestrator | 2025-02-04 09:41:22.325667 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-02-04 09:41:22.325679 | orchestrator | 2025-02-04 09:41:22.325692 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-02-04 09:41:22.325704 | orchestrator | Tuesday 04 February 2025 09:38:40 +0000 (0:00:02.658) 0:02:17.835 ****** 2025-02-04 09:41:22.325717 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:41:22.325729 | orchestrator | 2025-02-04 09:41:22.325742 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-02-04 09:41:22.325764 | orchestrator | Tuesday 04 February 2025 09:38:56 +0000 (0:00:15.051) 0:02:32.887 *2025-02-04 09:41:22 | INFO  | Task d6a43cdf-3552-40d2-be53-ab531cfa2f71 is in state SUCCESS 2025-02-04 09:41:22.325778 | orchestrator | 2025-02-04 09:41:22 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state STARTED 2025-02-04 09:41:22.325791 | orchestrator | 2025-02-04 09:41:22 | INFO  | Task a2947d26-032a-46b2-b0b0-2244a30dbc60 is in state STARTED 2025-02-04 09:41:22.325804 | orchestrator | 2025-02-04 09:41:22 | INFO  | Task 966254e6-54a6-4070-9ba7-b76f5e559021 is in state STARTED 2025-02-04 09:41:22.325816 | orchestrator | 2025-02-04 09:41:22 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:41:22.325847 | orchestrator | ***** 2025-02-04 09:41:22.325860 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:22.325873 | orchestrator | 2025-02-04 09:41:22.325885 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-02-04 09:41:22.325898 | orchestrator | Tuesday 04 February 2025 09:39:15 +0000 (0:00:19.527) 0:02:52.415 ****** 2025-02-04 09:41:22.325910 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:22.325928 | orchestrator | 2025-02-04 09:41:22.325940 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-02-04 09:41:22.325953 | orchestrator | 2025-02-04 09:41:22.325965 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-02-04 09:41:22.325978 | orchestrator | Tuesday 04 February 2025 09:39:19 +0000 (0:00:03.443) 0:02:55.858 ****** 2025-02-04 09:41:22.325990 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:41:22.326003 | orchestrator | 2025-02-04 09:41:22.326056 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-02-04 09:41:22.326073 | orchestrator | Tuesday 04 February 2025 09:39:39 +0000 (0:00:20.878) 0:03:16.737 ****** 2025-02-04 09:41:22.326085 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:22.326098 | orchestrator | 2025-02-04 09:41:22.326110 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-02-04 09:41:22.326123 | orchestrator | Tuesday 04 February 2025 09:39:40 +0000 (0:00:00.622) 0:03:17.359 ****** 2025-02-04 09:41:22.326135 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:22.326148 | orchestrator | 2025-02-04 09:41:22.326176 | orchestrator | PLAY [Apply mariadb post-configuration] 
**************************************** 2025-02-04 09:41:22.326189 | orchestrator | 2025-02-04 09:41:22.326202 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-02-04 09:41:22.326214 | orchestrator | Tuesday 04 February 2025 09:39:43 +0000 (0:00:03.238) 0:03:20.598 ****** 2025-02-04 09:41:22.326227 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:41:22.326239 | orchestrator | 2025-02-04 09:41:22.326252 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-02-04 09:41:22.326264 | orchestrator | Tuesday 04 February 2025 09:39:44 +0000 (0:00:00.734) 0:03:21.332 ****** 2025-02-04 09:41:22.326277 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:22.326289 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:22.326301 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:22.326314 | orchestrator | 2025-02-04 09:41:22.326326 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-02-04 09:41:22.326339 | orchestrator | Tuesday 04 February 2025 09:39:47 +0000 (0:00:02.760) 0:03:24.093 ****** 2025-02-04 09:41:22.326352 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:22.326364 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:22.326377 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:41:22.326389 | orchestrator | 2025-02-04 09:41:22.326408 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-02-04 09:41:22.326421 | orchestrator | Tuesday 04 February 2025 09:39:50 +0000 (0:00:02.835) 0:03:26.928 ****** 2025-02-04 09:41:22.326433 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:22.326445 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:22.326458 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:41:22.326470 | orchestrator | 2025-02-04 09:41:22.326483 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-02-04 09:41:22.326495 | orchestrator | Tuesday 04 February 2025 09:39:52 +0000 (0:00:02.582) 0:03:29.510 ****** 2025-02-04 09:41:22.326508 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:22.326520 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:22.326532 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:41:22.326545 | orchestrator | 2025-02-04 09:41:22.326557 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-02-04 09:41:22.326569 | orchestrator | Tuesday 04 February 2025 09:39:55 +0000 (0:00:02.732) 0:03:32.242 ****** 2025-02-04 09:41:22.326582 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for MariaDB service to be ready through VIP (6 retries left). 2025-02-04 09:41:22.326601 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Wait for MariaDB service to be ready through VIP (6 retries left). 2025-02-04 09:41:22.326614 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Wait for MariaDB service to be ready through VIP (6 retries left). 2025-02-04 09:41:22.326627 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Wait for MariaDB service to be ready through VIP (5 retries left). 2025-02-04 09:41:22.326639 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Wait for MariaDB service to be ready through VIP (5 retries left). 
2025-02-04 09:41:22.326652 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for MariaDB service to be ready through VIP (5 retries left). 2025-02-04 09:41:22.326664 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Wait for MariaDB service to be ready through VIP (4 retries left). 2025-02-04 09:41:22.326677 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Wait for MariaDB service to be ready through VIP (4 retries left). 2025-02-04 09:41:22.326696 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for MariaDB service to be ready through VIP (4 retries left). 2025-02-04 09:41:22.326710 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Wait for MariaDB service to be ready through VIP (3 retries left). 2025-02-04 09:41:22.326722 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Wait for MariaDB service to be ready through VIP (3 retries left). 2025-02-04 09:41:22.326734 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for MariaDB service to be ready through VIP (3 retries left). 2025-02-04 09:41:22.326747 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Wait for MariaDB service to be ready through VIP (2 retries left). 2025-02-04 09:41:22.326760 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Wait for MariaDB service to be ready through VIP (2 retries left). 2025-02-04 09:41:22.326772 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for MariaDB service to be ready through VIP (2 retries left). 2025-02-04 09:41:22.326784 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Wait for MariaDB service to be ready through VIP (1 retries left). 2025-02-04 09:41:22.326797 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Wait for MariaDB service to be ready through VIP (1 retries left). 2025-02-04 09:41:22.326809 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for MariaDB service to be ready through VIP (1 retries left). 2025-02-04 09:41:22.326822 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"attempts": 6, "changed": false, "cmd": ["docker", "exec", "mariadb", "mysql", "-h", "api-int.testbed.osism.xyz", "-P", "3306", "-u", "root", "-ppassword", "-e", "show databases;"], "delta": "0:00:02.406321", "end": "2025-02-04 09:41:15.173506", "msg": "non-zero return code", "rc": 1, "start": "2025-02-04 09:41:12.767185", "stderr": "ERROR 2002 (HY000): Can't connect to server on 'api-int.testbed.osism.xyz' (115)", "stderr_lines": ["ERROR 2002 (HY000): Can't connect to server on 'api-int.testbed.osism.xyz' (115)"], "stdout": "", "stdout_lines": []} 2025-02-04 09:41:22.326836 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"attempts": 6, "changed": false, "cmd": ["docker", "exec", "mariadb", "mysql", "-h", "api-int.testbed.osism.xyz", "-P", "3306", "-u", "root", "-ppassword", "-e", "show databases;"], "delta": "0:00:02.503230", "end": "2025-02-04 09:41:15.366082", "msg": "non-zero return code", "rc": 1, "start": "2025-02-04 09:41:12.862852", "stderr": "ERROR 2002 (HY000): Can't connect to server on 'api-int.testbed.osism.xyz' (115)", "stderr_lines": ["ERROR 2002 (HY000): Can't connect to server on 'api-int.testbed.osism.xyz' (115)"], "stdout": "", "stdout_lines": []} 2025-02-04 09:41:22.326850 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"attempts": 6, "changed": false, "cmd": ["docker", "exec", "mariadb", "mysql", "-h", "api-int.testbed.osism.xyz", "-P", "3306", "-u", "root", "-ppassword", "-e", "show databases;"], "delta": "0:00:02.506670", "end": "2025-02-04 09:41:19.585479", "msg": "non-zero return code", "rc": 1, "start": "2025-02-04 09:41:17.078809", "stderr": "ERROR 2002 (HY000): Can't connect to server on 'api-int.testbed.osism.xyz' (115)", "stderr_lines": ["ERROR 2002 (HY000): Can't connect to server on 'api-int.testbed.osism.xyz' (115)"], "stdout": "", "stdout_lines": []} 2025-02-04 09:41:22.326868 | orchestrator | 2025-02-04 09:41:22.326881 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:41:22.326894 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-02-04 09:41:22.326908 | orchestrator | testbed-node-0 : ok=33  changed=16  unreachable=0 failed=1  skipped=7  rescued=0 ignored=1  2025-02-04 09:41:22.326920 | orchestrator | testbed-node-1 : ok=19  changed=8  unreachable=0 failed=1  skipped=14  rescued=0 ignored=1  2025-02-04 09:41:22.326933 | orchestrator | testbed-node-2 : ok=19  changed=8  unreachable=0 failed=1  skipped=14  rescued=0 ignored=1  2025-02-04 09:41:22.326945 | orchestrator | 2025-02-04 09:41:22.326958 | orchestrator | 2025-02-04 09:41:22.326971 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:41:22.326983 | orchestrator | Tuesday 04 February 2025 09:41:19 +0000 (0:01:24.251) 0:04:56.494 ****** 2025-02-04 09:41:22.326996 | orchestrator | =============================================================================== 2025-02-04 09:41:22.327008 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP ------------- 84.25s 2025-02-04 09:41:22.327021 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 34.41s 2025-02-04 09:41:22.327038 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 34.10s 2025-02-04 09:41:25.370578 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 20.88s 2025-02-04 09:41:25.370839 | orchestrator | mariadb : Restart mariadb-clustercheck container ----------------------- 16.56s 2025-02-04 09:41:25.370876 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 12.04s 2025-02-04 09:41:25.370901 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.23s 2025-02-04 09:41:25.370927 | orchestrator | mariadb : Starting first MariaDB container ----------------------------- 10.56s 2025-02-04 09:41:25.370951 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 7.77s 2025-02-04 09:41:25.370975 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 7.32s 2025-02-04 09:41:25.370999 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 6.10s 2025-02-04 09:41:25.371021 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 5.84s 2025-02-04 09:41:25.371046 | orchestrator | mariadb : Copying over config.json files for services ------------------- 5.75s 2025-02-04 09:41:25.371072 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 4.33s 2025-02-04 09:41:25.371098 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP 
------------------ 3.54s 2025-02-04 09:41:25.371125 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 3.24s 2025-02-04 09:41:25.371182 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.84s 2025-02-04 09:41:25.371209 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.76s 2025-02-04 09:41:25.371233 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.73s 2025-02-04 09:41:25.371259 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.58s 2025-02-04 09:41:25.371309 | orchestrator | 2025-02-04 09:41:25 | INFO  | Task dceb67c2-663d-4aca-9ad7-8c4c843bf5c5 is in state STARTED 2025-02-04 09:41:25.378483 | orchestrator | 2025-02-04 09:41:25.378576 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-02-04 09:41:25.378623 | orchestrator | 2025-02-04 09:41:25.378737 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-02-04 09:41:25.378760 | orchestrator | 2025-02-04 09:41:25.378775 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-02-04 09:41:25.378790 | orchestrator | Tuesday 04 February 2025 09:27:16 +0000 (0:00:02.217) 0:00:02.217 ****** 2025-02-04 09:41:25.378805 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:41:25.378821 | orchestrator | 2025-02-04 09:41:25.378835 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-02-04 09:41:25.378849 | orchestrator | Tuesday 04 February 2025 09:27:18 +0000 (0:00:01.844) 0:00:04.061 ****** 2025-02-04 09:41:25.378864 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-0) 2025-02-04 09:41:25.378879 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-1) 2025-02-04 09:41:25.378894 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-2) 2025-02-04 09:41:25.378909 | orchestrator | 2025-02-04 09:41:25.378923 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-02-04 09:41:25.378937 | orchestrator | Tuesday 04 February 2025 09:27:19 +0000 (0:00:01.026) 0:00:05.087 ****** 2025-02-04 09:41:25.378953 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:41:25.379531 | orchestrator | 2025-02-04 09:41:25.379550 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-02-04 09:41:25.379565 | orchestrator | Tuesday 04 February 2025 09:27:21 +0000 (0:00:01.470) 0:00:06.558 ****** 2025-02-04 09:41:25.379580 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.379603 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.379618 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.379633 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.379647 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.379661 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.379675 | orchestrator | 2025-02-04 09:41:25.379690 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-02-04 09:41:25.379704 | 
orchestrator | Tuesday 04 February 2025 09:27:22 +0000 (0:00:01.789) 0:00:08.347 ****** 2025-02-04 09:41:25.379718 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.380097 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.380756 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.380777 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.380791 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.380806 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.380820 | orchestrator | 2025-02-04 09:41:25.380835 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-02-04 09:41:25.380849 | orchestrator | Tuesday 04 February 2025 09:27:23 +0000 (0:00:01.109) 0:00:09.457 ****** 2025-02-04 09:41:25.380864 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.380878 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.380892 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.380906 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.380921 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.380935 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.380949 | orchestrator | 2025-02-04 09:41:25.380964 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-02-04 09:41:25.380978 | orchestrator | Tuesday 04 February 2025 09:27:25 +0000 (0:00:01.579) 0:00:11.036 ****** 2025-02-04 09:41:25.380992 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.381006 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.381020 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.381035 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.381086 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.381824 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.381859 | orchestrator | 2025-02-04 09:41:25.381915 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-02-04 09:41:25.381931 | orchestrator | Tuesday 04 February 2025 09:27:26 +0000 (0:00:01.104) 0:00:12.140 ****** 2025-02-04 09:41:25.381945 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.381959 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.381973 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.382408 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.382464 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.382480 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.383206 | orchestrator | 2025-02-04 09:41:25.383227 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-02-04 09:41:25.383243 | orchestrator | Tuesday 04 February 2025 09:27:27 +0000 (0:00:00.908) 0:00:13.049 ****** 2025-02-04 09:41:25.383259 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.383273 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.383288 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.383302 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.383317 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.383332 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.383357 | orchestrator | 2025-02-04 09:41:25.383372 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-02-04 09:41:25.383388 | orchestrator | Tuesday 04 February 2025 09:27:28 +0000 (0:00:00.792) 0:00:13.842 ****** 2025-02-04 09:41:25.383403 | orchestrator | skipping: [testbed-node-3] 2025-02-04 
09:41:25.383419 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.383434 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.383449 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.383463 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.383478 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.383492 | orchestrator | 2025-02-04 09:41:25.383507 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-02-04 09:41:25.383522 | orchestrator | Tuesday 04 February 2025 09:27:29 +0000 (0:00:01.093) 0:00:14.936 ****** 2025-02-04 09:41:25.383537 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.383552 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.383567 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.383582 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.383596 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.383611 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.383626 | orchestrator | 2025-02-04 09:41:25.383735 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-02-04 09:41:25.383758 | orchestrator | Tuesday 04 February 2025 09:27:30 +0000 (0:00:01.035) 0:00:15.972 ****** 2025-02-04 09:41:25.383774 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-02-04 09:41:25.383789 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-04 09:41:25.383804 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-04 09:41:25.383819 | orchestrator | 2025-02-04 09:41:25.383834 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-02-04 09:41:25.383849 | orchestrator | Tuesday 04 February 2025 09:27:31 +0000 (0:00:00.887) 0:00:16.859 ****** 2025-02-04 09:41:25.383864 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.383879 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.383894 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.383908 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.383923 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.383938 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.383952 | orchestrator | 2025-02-04 09:41:25.383968 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-02-04 09:41:25.383983 | orchestrator | Tuesday 04 February 2025 09:27:32 +0000 (0:00:01.538) 0:00:18.397 ****** 2025-02-04 09:41:25.384004 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-02-04 09:41:25.384033 | orchestrator | changed: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-04 09:41:25.384049 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-04 09:41:25.384064 | orchestrator | 2025-02-04 09:41:25.384078 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-02-04 09:41:25.384093 | orchestrator | Tuesday 04 February 2025 09:27:35 +0000 (0:00:03.145) 0:00:21.542 ****** 2025-02-04 09:41:25.384108 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-02-04 09:41:25.384123 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-02-04 09:41:25.384138 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-2)  2025-02-04 09:41:25.384212 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.384228 | orchestrator | 2025-02-04 09:41:25.384243 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-02-04 09:41:25.384257 | orchestrator | Tuesday 04 February 2025 09:27:36 +0000 (0:00:00.917) 0:00:22.459 ****** 2025-02-04 09:41:25.384273 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-02-04 09:41:25.384291 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-02-04 09:41:25.384306 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-02-04 09:41:25.384320 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.384334 | orchestrator | 2025-02-04 09:41:25.384348 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-02-04 09:41:25.384362 | orchestrator | Tuesday 04 February 2025 09:27:38 +0000 (0:00:01.668) 0:00:24.128 ****** 2025-02-04 09:41:25.384377 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-02-04 09:41:25.384401 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-02-04 09:41:25.384417 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-02-04 09:41:25.384431 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.384446 | orchestrator | 2025-02-04 09:41:25.384460 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-02-04 09:41:25.384563 | orchestrator | Tuesday 04 February 2025 09:27:38 +0000 (0:00:00.183) 0:00:24.312 ****** 2025-02-04 09:41:25.384585 | orchestrator | skipping: [testbed-node-3] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-02-04 
09:27:33.624109', 'end': '2025-02-04 09:27:33.880830', 'delta': '0:00:00.256721', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-02-04 09:41:25.384616 | orchestrator | skipping: [testbed-node-3] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-02-04 09:27:34.456080', 'end': '2025-02-04 09:27:34.684243', 'delta': '0:00:00.228163', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-02-04 09:41:25.384632 | orchestrator | skipping: [testbed-node-3] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-02-04 09:27:35.296895', 'end': '2025-02-04 09:27:35.569751', 'delta': '0:00:00.272856', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-02-04 09:41:25.384647 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.384661 | orchestrator | 2025-02-04 09:41:25.384676 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-02-04 09:41:25.384690 | orchestrator | Tuesday 04 February 2025 09:27:38 +0000 (0:00:00.187) 0:00:24.499 ****** 2025-02-04 09:41:25.384705 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.384718 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.384731 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.384743 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.384756 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.384769 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.384781 | orchestrator | 2025-02-04 09:41:25.384794 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-02-04 09:41:25.384807 | orchestrator | Tuesday 04 February 2025 09:27:40 +0000 (0:00:01.665) 0:00:26.165 ****** 2025-02-04 09:41:25.384820 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-02-04 09:41:25.384833 | orchestrator | 2025-02-04 09:41:25.384846 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-02-04 09:41:25.384859 | orchestrator | Tuesday 04 February 2025 09:27:42 +0000 (0:00:01.709) 0:00:27.874 ****** 2025-02-04 09:41:25.384871 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.384884 | orchestrator | skipping: [testbed-node-4] 
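The fsid lookup above returned ok on the delegated first monitor, so a fsid could be read from the already-running cluster; that is why the rc-1 and "generate cluster fsid" fallbacks around here are skipped. Roughly what that lookup runs; a sketch only, since the exact wrapper and flags in ceph-ansible's facts.yml may differ:

    # Ask the running mon container (name as shown in the "find a running
    # mon container" task above) for the fsid of the existing cluster;
    # rc 0 plus a UUID on stdout short-circuits fsid generation.
    docker exec ceph-mon-testbed-node-0 ceph --cluster ceph fsid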
2025-02-04 09:41:25.384897 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.384909 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.384922 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.384934 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.384947 | orchestrator | 2025-02-04 09:41:25.384960 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-02-04 09:41:25.384972 | orchestrator | Tuesday 04 February 2025 09:27:43 +0000 (0:00:01.328) 0:00:29.203 ****** 2025-02-04 09:41:25.384985 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.384997 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.385010 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.385033 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.385046 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.385059 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.385071 | orchestrator | 2025-02-04 09:41:25.385084 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-02-04 09:41:25.385118 | orchestrator | Tuesday 04 February 2025 09:27:45 +0000 (0:00:01.587) 0:00:30.791 ****** 2025-02-04 09:41:25.385131 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.385144 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.385212 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.385226 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.385239 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.385252 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.385265 | orchestrator | 2025-02-04 09:41:25.385275 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-02-04 09:41:25.385351 | orchestrator | Tuesday 04 February 2025 09:27:46 +0000 (0:00:01.163) 0:00:31.954 ****** 2025-02-04 09:41:25.385366 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.385376 | orchestrator | 2025-02-04 09:41:25.385387 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-02-04 09:41:25.385397 | orchestrator | Tuesday 04 February 2025 09:27:46 +0000 (0:00:00.201) 0:00:32.155 ****** 2025-02-04 09:41:25.385408 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.385418 | orchestrator | 2025-02-04 09:41:25.385428 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-02-04 09:41:25.385439 | orchestrator | Tuesday 04 February 2025 09:27:46 +0000 (0:00:00.342) 0:00:32.498 ****** 2025-02-04 09:41:25.385449 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.385459 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.385469 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.385480 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.385492 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.385504 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.385514 | orchestrator | 2025-02-04 09:41:25.385525 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-02-04 09:41:25.385535 | orchestrator | Tuesday 04 February 2025 09:27:47 +0000 (0:00:00.967) 0:00:33.465 ****** 2025-02-04 09:41:25.385546 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.385556 | orchestrator | skipping: 
[testbed-node-4] 2025-02-04 09:41:25.385566 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.385576 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.385586 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.385596 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.385607 | orchestrator | 2025-02-04 09:41:25.385617 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-02-04 09:41:25.385627 | orchestrator | Tuesday 04 February 2025 09:27:49 +0000 (0:00:01.376) 0:00:34.842 ****** 2025-02-04 09:41:25.385638 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.385648 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.385658 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.385669 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.385679 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.385689 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.385699 | orchestrator | 2025-02-04 09:41:25.385709 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-02-04 09:41:25.385725 | orchestrator | Tuesday 04 February 2025 09:27:50 +0000 (0:00:01.090) 0:00:35.932 ****** 2025-02-04 09:41:25.385736 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.385746 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.385756 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.385766 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.385777 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.385787 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.385804 | orchestrator | 2025-02-04 09:41:25.385815 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-02-04 09:41:25.385825 | orchestrator | Tuesday 04 February 2025 09:27:51 +0000 (0:00:01.178) 0:00:37.111 ****** 2025-02-04 09:41:25.385835 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.385846 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.385856 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.385866 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.385924 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.385936 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.385947 | orchestrator | 2025-02-04 09:41:25.385958 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-02-04 09:41:25.385969 | orchestrator | Tuesday 04 February 2025 09:27:52 +0000 (0:00:01.008) 0:00:38.119 ****** 2025-02-04 09:41:25.385980 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.385991 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.386001 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.386012 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.386051 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.386062 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.386072 | orchestrator | 2025-02-04 09:41:25.386083 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-02-04 09:41:25.386093 | orchestrator | Tuesday 04 February 2025 09:27:53 +0000 (0:00:01.398) 0:00:39.518 ****** 2025-02-04 09:41:25.386103 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.386113 | orchestrator | 
skipping: [testbed-node-4] 2025-02-04 09:41:25.386124 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.386134 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.386144 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.386171 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.386182 | orchestrator | 2025-02-04 09:41:25.386192 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-02-04 09:41:25.386203 | orchestrator | Tuesday 04 February 2025 09:27:55 +0000 (0:00:01.108) 0:00:40.627 ****** 2025-02-04 09:41:25.386214 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8b56b489--397c--55c4--ba6f--4e97fbbc410a-osd--block--8b56b489--397c--55c4--ba6f--4e97fbbc410a', 'dm-uuid-LVM-B01lyk59fPYk0eb53k6ARpN49GK9UbktoCtaZuK3f9ilZ88Yz3DPGOJfJ7M72CzT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.386284 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fd89a215--a86e--5b79--8dd1--0773a21fefe5-osd--block--fd89a215--a86e--5b79--8dd1--0773a21fefe5', 'dm-uuid-LVM-IkDaySRH5szLgU2bTjKQZ3ENZv3wycaqNKtcDSCduJglCEODV2KwhxRRIFL7pNKX'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.386299 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.386317 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.386335 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.386346 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.386357 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.386367 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.386382 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a9a0f878--ef24--53af--8bd4--10a12036221e-osd--block--a9a0f878--ef24--53af--8bd4--10a12036221e', 'dm-uuid-LVM-toIrQY9goYVMdA4cmLFXcIfN2LfLgZAyj38QOTwc46WWHX3hAIdk7Y68fCfrJ73X'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.386443 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--857e455f--002b--509a--b66d--9c4a1025daeb-osd--block--857e455f--002b--509a--b66d--9c4a1025daeb', 'dm-uuid-LVM-1HZKtUa3373KDVkDkv437mab9N8siFTP3p90pYgh2XLoEmLVDzYugBvWI9Ll2Pun'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.386458 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.386469 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.386486 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.386497 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.386507 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.386571 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34756b4f-e35d-475a-95c3-a17bc4378557', 'scsi-SQEMU_QEMU_HARDDISK_34756b4f-e35d-475a-95c3-a17bc4378557'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34756b4f-e35d-475a-95c3-a17bc4378557-part1', 'scsi-SQEMU_QEMU_HARDDISK_34756b4f-e35d-475a-95c3-a17bc4378557-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34756b4f-e35d-475a-95c3-a17bc4378557-part14', 'scsi-SQEMU_QEMU_HARDDISK_34756b4f-e35d-475a-95c3-a17bc4378557-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34756b4f-e35d-475a-95c3-a17bc4378557-part15', 'scsi-SQEMU_QEMU_HARDDISK_34756b4f-e35d-475a-95c3-a17bc4378557-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_34756b4f-e35d-475a-95c3-a17bc4378557-part16', 'scsi-SQEMU_QEMU_HARDDISK_34756b4f-e35d-475a-95c3-a17bc4378557-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-04 09:41:25.386588 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.386605 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8b56b489--397c--55c4--ba6f--4e97fbbc410a-osd--block--8b56b489--397c--55c4--ba6f--4e97fbbc410a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4j33pY-HrV3-48cg-gClm-1bGn-ALC1-s4Wusx', 'scsi-0QEMU_QEMU_HARDDISK_3639d977-d811-449d-b930-d83a01ae7e68', 'scsi-SQEMU_QEMU_HARDDISK_3639d977-d811-449d-b930-d83a01ae7e68'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-04 09:41:25.386617 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--fd89a215--a86e--5b79--8dd1--0773a21fefe5-osd--block--fd89a215--a86e--5b79--8dd1--0773a21fefe5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-RbZGLm-cfWK-0BdY-mNOU-51Ag-ss06-oFGHmb', 'scsi-0QEMU_QEMU_HARDDISK_77d1cf45-53d9-435f-b362-8711a42fa03b', 'scsi-SQEMU_QEMU_HARDDISK_77d1cf45-53d9-435f-b362-8711a42fa03b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-04 09:41:25.386629 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5ef04da4-33c0-4c31-8f35-70c17ff294fe', 'scsi-SQEMU_QEMU_HARDDISK_5ef04da4-33c0-4c31-8f35-70c17ff294fe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-04 09:41:25.386641 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.386652 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-02-04-08-43-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-04 09:41:25.386719 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.386735 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.386752 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.386763 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--25e96ed1--6b8f--57c8--bdd9--51fb1c446a39-osd--block--25e96ed1--6b8f--57c8--bdd9--51fb1c446a39', 'dm-uuid-LVM-fGnccIHSu83Z1tRPuKlHYCH08p8E2cIuC06fj2NOjjTwp3wqLT4OmeQMLVcBrQOu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 
1}})  2025-02-04 09:41:25.386775 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_65dbfb35-c088-49c9-9717-b00e675ef863', 'scsi-SQEMU_QEMU_HARDDISK_65dbfb35-c088-49c9-9717-b00e675ef863'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_65dbfb35-c088-49c9-9717-b00e675ef863-part1', 'scsi-SQEMU_QEMU_HARDDISK_65dbfb35-c088-49c9-9717-b00e675ef863-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_65dbfb35-c088-49c9-9717-b00e675ef863-part14', 'scsi-SQEMU_QEMU_HARDDISK_65dbfb35-c088-49c9-9717-b00e675ef863-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_65dbfb35-c088-49c9-9717-b00e675ef863-part15', 'scsi-SQEMU_QEMU_HARDDISK_65dbfb35-c088-49c9-9717-b00e675ef863-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_65dbfb35-c088-49c9-9717-b00e675ef863-part16', 'scsi-SQEMU_QEMU_HARDDISK_65dbfb35-c088-49c9-9717-b00e675ef863-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-04 09:41:25.386857 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--89dbb78a--6e2f--596a--9aad--74f54f8525ce-osd--block--89dbb78a--6e2f--596a--9aad--74f54f8525ce', 'dm-uuid-LVM-3riqEx7zrBBhsVWPuFVDVu06C9kXsbHwNmX9ZVi0vsW7DR7YKDZnsvcSRAnIq8vW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25 | INFO  | Task cd29dea3-7f34-4656-ac40-3dc8e1d4db53 is in state SUCCESS 2025-02-04 09:41:25.386881 | orchestrator | 2025-02-04 09:41:25.386893 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.386903 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [],
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.386914 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a9a0f878--ef24--53af--8bd4--10a12036221e-osd--block--a9a0f878--ef24--53af--8bd4--10a12036221e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-SyGV31-y20g-Nbpy-rOOh-TXyN-2Jia-a0zVGL', 'scsi-0QEMU_QEMU_HARDDISK_d5e896df-3760-43bc-823d-dd864c8452e8', 'scsi-SQEMU_QEMU_HARDDISK_d5e896df-3760-43bc-823d-dd864c8452e8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-04 09:41:25.386925 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.386936 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.386947 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.386957 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--857e455f--002b--509a--b66d--9c4a1025daeb-osd--block--857e455f--002b--509a--b66d--9c4a1025daeb'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-S7LGzz-ZU50-C9Ox-v5WD-WWVD-BOCx-m3V4Xc', 'scsi-0QEMU_QEMU_HARDDISK_81f63dc5-7b43-4c99-9b7b-2b520b540dae', 'scsi-SQEMU_QEMU_HARDDISK_81f63dc5-7b43-4c99-9b7b-2b520b540dae'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-04 09:41:25.387033 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c8a0131d-fae0-46a9-a275-20bf3d241b40', 'scsi-SQEMU_QEMU_HARDDISK_c8a0131d-fae0-46a9-a275-20bf3d241b40'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-04 09:41:25.387057 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.387068 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-02-04-08-43-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-04 09:41:25.387084 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.387095 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.387171 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a424fac1-723d-4e26-82ac-15e9ac8e6afc', 'scsi-SQEMU_QEMU_HARDDISK_a424fac1-723d-4e26-82ac-15e9ac8e6afc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a424fac1-723d-4e26-82ac-15e9ac8e6afc-part1', 'scsi-SQEMU_QEMU_HARDDISK_a424fac1-723d-4e26-82ac-15e9ac8e6afc-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a424fac1-723d-4e26-82ac-15e9ac8e6afc-part14', 'scsi-SQEMU_QEMU_HARDDISK_a424fac1-723d-4e26-82ac-15e9ac8e6afc-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a424fac1-723d-4e26-82ac-15e9ac8e6afc-part15', 'scsi-SQEMU_QEMU_HARDDISK_a424fac1-723d-4e26-82ac-15e9ac8e6afc-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a424fac1-723d-4e26-82ac-15e9ac8e6afc-part16', 'scsi-SQEMU_QEMU_HARDDISK_a424fac1-723d-4e26-82ac-15e9ac8e6afc-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-04 09:41:25.387195 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--25e96ed1--6b8f--57c8--bdd9--51fb1c446a39-osd--block--25e96ed1--6b8f--57c8--bdd9--51fb1c446a39'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-KASbhx-kt6q-3lXL-AKq7-KcOb-OQf8-r6zGzT', 'scsi-0QEMU_QEMU_HARDDISK_d26fda4b-4cd5-4c78-8c80-a561505edb1a', 'scsi-SQEMU_QEMU_HARDDISK_d26fda4b-4cd5-4c78-8c80-a561505edb1a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-04 09:41:25.387207 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--89dbb78a--6e2f--596a--9aad--74f54f8525ce-osd--block--89dbb78a--6e2f--596a--9aad--74f54f8525ce'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-AewPDk-8lGP-42I1-i41I-DT0o-5y8p-y7t0Gg', 'scsi-0QEMU_QEMU_HARDDISK_6f1478d2-b213-4f65-abc0-539a0d8b61fa', 'scsi-SQEMU_QEMU_HARDDISK_6f1478d2-b213-4f65-abc0-539a0d8b61fa'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-04 09:41:25.387218 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e725b5a-39a0-4c9f-add8-ff554d181543', 'scsi-SQEMU_QEMU_HARDDISK_2e725b5a-39a0-4c9f-add8-ff554d181543'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-04 09:41:25.387229 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.387241 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-02-04-08-43-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-04 09:41:25.387251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.387262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.387343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.387360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.387372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.387383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.387394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.387413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.387476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08b5b955-2a19-469b-a49b-98bfe933a640', 'scsi-SQEMU_QEMU_HARDDISK_08b5b955-2a19-469b-a49b-98bfe933a640'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08b5b955-2a19-469b-a49b-98bfe933a640-part1', 'scsi-SQEMU_QEMU_HARDDISK_08b5b955-2a19-469b-a49b-98bfe933a640-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08b5b955-2a19-469b-a49b-98bfe933a640-part14', 'scsi-SQEMU_QEMU_HARDDISK_08b5b955-2a19-469b-a49b-98bfe933a640-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08b5b955-2a19-469b-a49b-98bfe933a640-part15', 'scsi-SQEMU_QEMU_HARDDISK_08b5b955-2a19-469b-a49b-98bfe933a640-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08b5b955-2a19-469b-a49b-98bfe933a640-part16', 'scsi-SQEMU_QEMU_HARDDISK_08b5b955-2a19-469b-a49b-98bfe933a640-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-04 09:41:25.387499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17d2e1d1-143d-4df8-b794-06a49264520c', 'scsi-SQEMU_QEMU_HARDDISK_17d2e1d1-143d-4df8-b794-06a49264520c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-04 09:41:25.387512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_212bd4e9-d9e4-4fb6-aa1a-c75a1354e796', 'scsi-SQEMU_QEMU_HARDDISK_212bd4e9-d9e4-4fb6-aa1a-c75a1354e796'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-04 09:41:25.387524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84ce1e5e-93f1-4e17-9c74-c98d07335b49', 'scsi-SQEMU_QEMU_HARDDISK_84ce1e5e-93f1-4e17-9c74-c98d07335b49'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-04 09:41:25.387536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-02-04-08-43-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-04 09:41:25.387547 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.387558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.387575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.387647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.387664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.387675 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.387685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.387696 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.387706 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.387717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.387727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.387788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_213e25e0-25b4-4110-96e2-3b7485daf5ef', 'scsi-SQEMU_QEMU_HARDDISK_213e25e0-25b4-4110-96e2-3b7485daf5ef'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_213e25e0-25b4-4110-96e2-3b7485daf5ef-part1', 'scsi-SQEMU_QEMU_HARDDISK_213e25e0-25b4-4110-96e2-3b7485daf5ef-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_213e25e0-25b4-4110-96e2-3b7485daf5ef-part14', 'scsi-SQEMU_QEMU_HARDDISK_213e25e0-25b4-4110-96e2-3b7485daf5ef-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_213e25e0-25b4-4110-96e2-3b7485daf5ef-part15', 'scsi-SQEMU_QEMU_HARDDISK_213e25e0-25b4-4110-96e2-3b7485daf5ef-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_213e25e0-25b4-4110-96e2-3b7485daf5ef-part16', 'scsi-SQEMU_QEMU_HARDDISK_213e25e0-25b4-4110-96e2-3b7485daf5ef-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-04 09:41:25.387810 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_42292afd-cea6-4e4a-b321-0bc7a4bab513', 'scsi-SQEMU_QEMU_HARDDISK_42292afd-cea6-4e4a-b321-0bc7a4bab513'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-04 09:41:25.387822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.387837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_263b1482-edb0-40a4-b8be-a0e8e90b1cea', 'scsi-SQEMU_QEMU_HARDDISK_263b1482-edb0-40a4-b8be-a0e8e90b1cea'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-04 09:41:25.387849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.387860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4d0f6c82-70d7-420b-af8d-33a5666fb869', 'scsi-SQEMU_QEMU_HARDDISK_4d0f6c82-70d7-420b-af8d-33a5666fb869'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-04 09:41:25.387876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.387887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-02-04-08-43-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-04 09:41:25.387960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.387976 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.387986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.387997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.388007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.388018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:41:25.388029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_047150d6-6f6c-4019-b80e-4be9a7d65c24', 'scsi-SQEMU_QEMU_HARDDISK_047150d6-6f6c-4019-b80e-4be9a7d65c24'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_047150d6-6f6c-4019-b80e-4be9a7d65c24-part1', 'scsi-SQEMU_QEMU_HARDDISK_047150d6-6f6c-4019-b80e-4be9a7d65c24-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_047150d6-6f6c-4019-b80e-4be9a7d65c24-part14', 'scsi-SQEMU_QEMU_HARDDISK_047150d6-6f6c-4019-b80e-4be9a7d65c24-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_047150d6-6f6c-4019-b80e-4be9a7d65c24-part15', 'scsi-SQEMU_QEMU_HARDDISK_047150d6-6f6c-4019-b80e-4be9a7d65c24-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_047150d6-6f6c-4019-b80e-4be9a7d65c24-part16', 'scsi-SQEMU_QEMU_HARDDISK_047150d6-6f6c-4019-b80e-4be9a7d65c24-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-04 09:41:25.388096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_38b972c8-2def-4835-a52d-e389734565af', 'scsi-SQEMU_QEMU_HARDDISK_38b972c8-2def-4835-a52d-e389734565af'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-04 09:41:25.388111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_901ca6f0-6e57-48c8-bbd3-08a878599b73', 'scsi-SQEMU_QEMU_HARDDISK_901ca6f0-6e57-48c8-bbd3-08a878599b73'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-04 09:41:25.388122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_843e4ddb-3456-4bc2-9151-43109d21e883', 'scsi-SQEMU_QEMU_HARDDISK_843e4ddb-3456-4bc2-9151-43109d21e883'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-04 09:41:25.388133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-02-04-08-43-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-04 09:41:25.388149 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.388213 | orchestrator | 2025-02-04 09:41:25.388223 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-02-04 09:41:25.388234 | orchestrator | Tuesday 04 February 2025 09:27:58 +0000 (0:00:03.114) 0:00:43.742 ****** 2025-02-04 09:41:25.388245 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-02-04 09:41:25.388255 | orchestrator | 2025-02-04 09:41:25.388266 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-02-04 09:41:25.388276 | orchestrator | Tuesday 04 February 2025 09:27:58 +0000 (0:00:00.770) 0:00:44.512 ****** 2025-02-04 09:41:25.388300 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.388311 | orchestrator | 2025-02-04 09:41:25.388322 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-02-04 09:41:25.388332 | orchestrator | Tuesday 04 February 2025 09:27:59 +0000 (0:00:00.233) 0:00:44.745 ****** 2025-02-04 09:41:25.388343 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.388353 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.388363 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.388373 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.388384 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.388399 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.388410 | orchestrator | 2025-02-04 09:41:25.388420 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-02-04 09:41:25.388431 | orchestrator | Tuesday 04 February 2025 09:28:00 +0000 (0:00:00.975) 0:00:45.721 ****** 
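The wall of per-device "skipping" items above is the signature of an Ansible task that loops over each host's ansible_facts['devices'] dictionary with a per-item "when" guard: every device that fails the guard (the loop0..loop7 virtual devices, the partitioned root disk sda, the sdb/sdc disks already holding Ceph LVM volumes, and the sr0 config-drive) is reported as its own skipped item. A minimal task sketch of that pattern, with illustrative names and a simplified guard, not the actual ceph-ansible condition (which in this run also skipped the empty sdd disks):

# Sketch: keep only non-virtual, unpartitioned, unclaimed disks as candidates.
- name: find unused data disks
  ansible.builtin.set_fact:
    candidate_disks: "{{ candidate_disks | default([]) + [item.key] }}"
  loop: "{{ ansible_facts['devices'] | dict2items }}"
  when:
    - not item.key.startswith('loop')         # virtual loop devices
    - item.value.removable == '0'             # rules out the sr0 DVD drive
    - item.value.partitions | length == 0     # rules out the partitioned sda
    - item.value.holders | length == 0        # rules out LVM-claimed sdb/sdc
    - item.value.sectors | int > 0            # rules out zero-size devices

Each skipped loop item is echoed with the full device dict as its "item", which is why the log prints the complete holders/links/partitions structure for every device on every node.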
2025-02-04 09:41:25.388441 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.388452 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.388462 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.388472 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.388483 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.388493 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.388503 | orchestrator | 2025-02-04 09:41:25.388513 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-02-04 09:41:25.388524 | orchestrator | Tuesday 04 February 2025 09:28:02 +0000 (0:00:01.941) 0:00:47.662 ****** 2025-02-04 09:41:25.388534 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.388544 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.388555 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.388565 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.388575 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.388586 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.388596 | orchestrator | 2025-02-04 09:41:25.388607 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-02-04 09:41:25.388676 | orchestrator | Tuesday 04 February 2025 09:28:03 +0000 (0:00:01.197) 0:00:48.859 ****** 2025-02-04 09:41:25.388691 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.388702 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.388714 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.388723 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.388732 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.388741 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.388750 | orchestrator | 2025-02-04 09:41:25.388759 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-02-04 09:41:25.388768 | orchestrator | Tuesday 04 February 2025 09:28:04 +0000 (0:00:01.161) 0:00:50.021 ****** 2025-02-04 09:41:25.388778 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.388787 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.388796 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.388804 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.388813 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.388822 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.388831 | orchestrator | 2025-02-04 09:41:25.388840 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-02-04 09:41:25.388850 | orchestrator | Tuesday 04 February 2025 09:28:05 +0000 (0:00:01.198) 0:00:51.219 ****** 2025-02-04 09:41:25.388864 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.388874 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.388883 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.388892 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.388901 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.388910 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.388919 | orchestrator | 2025-02-04 09:41:25.388928 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-02-04 09:41:25.388937 | orchestrator | Tuesday 04 February 2025 09:28:07 +0000 (0:00:01.491) 0:00:52.710 ****** 2025-02-04 09:41:25.388946 | orchestrator | skipping: 
[testbed-node-3] 2025-02-04 09:41:25.388955 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.388964 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.388973 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.388982 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.388991 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.389000 | orchestrator | 2025-02-04 09:41:25.389009 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-02-04 09:41:25.389029 | orchestrator | Tuesday 04 February 2025 09:28:08 +0000 (0:00:01.107) 0:00:53.818 ****** 2025-02-04 09:41:25.389039 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-02-04 09:41:25.389049 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-02-04 09:41:25.389062 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-02-04 09:41:25.389071 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-02-04 09:41:25.389081 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.389090 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-02-04 09:41:25.389099 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-02-04 09:41:25.389108 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.389117 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-02-04 09:41:25.389127 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-04 09:41:25.389135 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-02-04 09:41:25.389144 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-04 09:41:25.389169 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-02-04 09:41:25.389178 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.389188 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-02-04 09:41:25.389197 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-04 09:41:25.389206 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.389215 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-02-04 09:41:25.389224 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-02-04 09:41:25.389233 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-02-04 09:41:25.389241 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.389250 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-02-04 09:41:25.389260 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-02-04 09:41:25.389269 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.389278 | orchestrator | 2025-02-04 09:41:25.389287 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-02-04 09:41:25.389295 | orchestrator | Tuesday 04 February 2025 09:28:11 +0000 (0:00:02.969) 0:00:56.787 ****** 2025-02-04 09:41:25.389305 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-02-04 09:41:25.389316 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-02-04 09:41:25.389326 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-02-04 09:41:25.389337 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-02-04 09:41:25.389347 | orchestrator | 
skipping: [testbed-node-4] => (item=testbed-node-1)  2025-02-04 09:41:25.389363 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.389373 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-02-04 09:41:25.389382 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-02-04 09:41:25.389392 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-02-04 09:41:25.389403 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.389414 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-04 09:41:25.389423 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-02-04 09:41:25.389433 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.389444 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-04 09:41:25.389453 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-02-04 09:41:25.389514 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-04 09:41:25.389528 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.389539 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-02-04 09:41:25.389549 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-02-04 09:41:25.389558 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-02-04 09:41:25.389569 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.389579 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-02-04 09:41:25.389589 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-02-04 09:41:25.389599 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.389608 | orchestrator | 2025-02-04 09:41:25.389649 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-02-04 09:41:25.389660 | orchestrator | Tuesday 04 February 2025 09:28:15 +0000 (0:00:03.819) 0:01:00.607 ****** 2025-02-04 09:41:25.389671 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-02-04 09:41:25.389682 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-02-04 09:41:25.389691 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-02-04 09:41:25.389700 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-02-04 09:41:25.389710 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-02-04 09:41:25.389719 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-02-04 09:41:25.389728 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-02-04 09:41:25.389737 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-02-04 09:41:25.389746 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-02-04 09:41:25.389756 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-02-04 09:41:25.389765 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-02-04 09:41:25.389774 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-02-04 09:41:25.389783 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-02-04 09:41:25.389792 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-02-04 09:41:25.389801 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-02-04 09:41:25.389810 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-02-04 09:41:25.389819 | orchestrator | ok: [testbed-node-2] => 
(item=testbed-node-2) 2025-02-04 09:41:25.389828 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-02-04 09:41:25.389837 | orchestrator | 2025-02-04 09:41:25.389846 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-02-04 09:41:25.389855 | orchestrator | Tuesday 04 February 2025 09:28:19 +0000 (0:00:04.382) 0:01:04.990 ****** 2025-02-04 09:41:25.389865 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-02-04 09:41:25.389874 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-02-04 09:41:25.389883 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-02-04 09:41:25.389895 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-02-04 09:41:25.389913 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-02-04 09:41:25.389922 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-02-04 09:41:25.389931 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.389941 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.389954 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-02-04 09:41:25.389964 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-02-04 09:41:25.389973 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-02-04 09:41:25.389982 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-04 09:41:25.389991 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-04 09:41:25.390000 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-04 09:41:25.390009 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.390039 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-02-04 09:41:25.390049 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-02-04 09:41:25.390058 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-02-04 09:41:25.390067 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.390075 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.390084 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-02-04 09:41:25.390093 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-02-04 09:41:25.390101 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-02-04 09:41:25.390110 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.390119 | orchestrator | 2025-02-04 09:41:25.390128 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-02-04 09:41:25.390136 | orchestrator | Tuesday 04 February 2025 09:28:20 +0000 (0:00:01.237) 0:01:06.227 ****** 2025-02-04 09:41:25.390145 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-02-04 09:41:25.390167 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-02-04 09:41:25.390176 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-02-04 09:41:25.390185 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-02-04 09:41:25.390193 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-02-04 09:41:25.390202 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.390211 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-02-04 09:41:25.390219 | orchestrator | 
skipping: [testbed-node-5] => (item=testbed-node-0)  2025-02-04 09:41:25.390228 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-02-04 09:41:25.390236 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-02-04 09:41:25.390245 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.390254 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-04 09:41:25.390314 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-04 09:41:25.390327 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-04 09:41:25.390337 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.390346 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-02-04 09:41:25.390355 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-02-04 09:41:25.390364 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.390373 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-02-04 09:41:25.390382 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.390391 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-02-04 09:41:25.390400 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-02-04 09:41:25.390409 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-02-04 09:41:25.390418 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.390427 | orchestrator | 2025-02-04 09:41:25.390436 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-02-04 09:41:25.390450 | orchestrator | Tuesday 04 February 2025 09:28:21 +0000 (0:00:00.983) 0:01:07.211 ****** 2025-02-04 09:41:25.390460 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-02-04 09:41:25.390470 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-02-04 09:41:25.390479 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-02-04 09:41:25.390488 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.390497 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-02-04 09:41:25.390507 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-02-04 09:41:25.390519 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-02-04 09:41:25.390529 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-02-04 09:41:25.390538 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.390547 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-02-04 09:41:25.390556 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-02-04 09:41:25.390565 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-02-04 09:41:25.390574 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-02-04 09:41:25.390584 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-02-04 09:41:25.390593 | orchestrator | 
skipping: [testbed-node-5] 2025-02-04 09:41:25.390602 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-02-04 09:41:25.390611 | orchestrator | ok: [testbed-node-1] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'}) 2025-02-04 09:41:25.390620 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-02-04 09:41:25.390629 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-02-04 09:41:25.390638 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-02-04 09:41:25.390647 | orchestrator | ok: [testbed-node-2] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'}) 2025-02-04 09:41:25.390656 | orchestrator | 2025-02-04 09:41:25.390665 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-02-04 09:41:25.390674 | orchestrator | Tuesday 04 February 2025 09:28:22 +0000 (0:00:01.132) 0:01:08.343 ****** 2025-02-04 09:41:25.390683 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.390692 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.390701 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.390710 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:41:25.390719 | orchestrator | 2025-02-04 09:41:25.390728 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-02-04 09:41:25.390751 | orchestrator | Tuesday 04 February 2025 09:28:23 +0000 (0:00:01.169) 0:01:09.513 ****** 2025-02-04 09:41:25.390760 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.390770 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.390779 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.390788 | orchestrator | 2025-02-04 09:41:25.390797 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-02-04 09:41:25.390806 | orchestrator | Tuesday 04 February 2025 09:28:24 +0000 (0:00:00.879) 0:01:10.393 ****** 2025-02-04 09:41:25.390815 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.390830 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.390838 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.390847 | orchestrator | 2025-02-04 09:41:25.390856 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-02-04 09:41:25.390864 | orchestrator | Tuesday 04 February 2025 09:28:25 +0000 (0:00:00.647) 0:01:11.040 ****** 2025-02-04 09:41:25.390873 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.390882 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.390891 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.390899 | orchestrator | 2025-02-04 09:41:25.390908 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-02-04 09:41:25.390961 | orchestrator | Tuesday 04 February 2025 09:28:26 +0000 (0:00:01.178) 0:01:12.219 ****** 2025-02-04 09:41:25.390974 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.390983 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.390991 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.391000 | orchestrator | 
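The runs of _monitor_addresses and _radosgw_address tasks here show ceph-ansible's address-resolution pattern: each possible source (an address block in IPv4 or IPv6 form, an explicit address, or an interface in IPv4 or IPv6 form) is a separate set_fact task guarded by a "when", so every variant except the one that applies reports "skipping". In this run the explicit monitor_address and radosgw_address values won. A sketch of the winning monitor variant, assuming the conventional "mons" group name (illustrative, not the verbatim ceph-ansible task):

# Sketch: collect one {name, addr} pair per monitor from hostvars.
- name: set_fact _monitor_addresses to monitor_address
  ansible.builtin.set_fact:
    _monitor_addresses: >-
      {{ _monitor_addresses | default([]) +
         [{'name': item, 'addr': hostvars[item]['monitor_address']}] }}
  loop: "{{ groups['mons'] }}"
  when: hostvars[item]['monitor_address'] is defined

The follow-up set_fact _current_monitor_address task then picks the entry whose name matches the current host, which is why testbed-node-0/1/2 each reported "ok" for exactly one item above and skipped the other two.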
2025-02-04 09:41:25.391009 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-02-04 09:41:25.391018 | orchestrator | Tuesday 04 February 2025 09:28:27 +0000 (0:00:00.978) 0:01:13.197 ****** 2025-02-04 09:41:25.391026 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-04 09:41:25.391035 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-04 09:41:25.391043 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-04 09:41:25.391052 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.391066 | orchestrator | 2025-02-04 09:41:25.391075 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-02-04 09:41:25.391085 | orchestrator | Tuesday 04 February 2025 09:28:28 +0000 (0:00:01.150) 0:01:14.348 ****** 2025-02-04 09:41:25.391094 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-04 09:41:25.391103 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-04 09:41:25.391112 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-04 09:41:25.391121 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.391130 | orchestrator | 2025-02-04 09:41:25.391139 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-02-04 09:41:25.391148 | orchestrator | Tuesday 04 February 2025 09:28:29 +0000 (0:00:00.463) 0:01:14.811 ****** 2025-02-04 09:41:25.391194 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-04 09:41:25.391204 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-04 09:41:25.391212 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-04 09:41:25.391221 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.391230 | orchestrator | 2025-02-04 09:41:25.391238 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-04 09:41:25.391247 | orchestrator | Tuesday 04 February 2025 09:28:29 +0000 (0:00:00.429) 0:01:15.241 ****** 2025-02-04 09:41:25.391256 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.391264 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.391273 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.391282 | orchestrator | 2025-02-04 09:41:25.391290 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-02-04 09:41:25.391299 | orchestrator | Tuesday 04 February 2025 09:28:30 +0000 (0:00:00.569) 0:01:15.810 ****** 2025-02-04 09:41:25.391308 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-02-04 09:41:25.391317 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-02-04 09:41:25.391325 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-02-04 09:41:25.391334 | orchestrator | 2025-02-04 09:41:25.391342 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-02-04 09:41:25.391351 | orchestrator | Tuesday 04 February 2025 09:28:31 +0000 (0:00:01.596) 0:01:17.407 ****** 2025-02-04 09:41:25.391359 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.391368 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.391383 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.391392 | orchestrator | 2025-02-04 09:41:25.391404 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] 
*************************** 2025-02-04 09:41:25.391413 | orchestrator | Tuesday 04 February 2025 09:28:32 +0000 (0:00:00.722) 0:01:18.130 ****** 2025-02-04 09:41:25.391422 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.391435 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.391443 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.391452 | orchestrator | 2025-02-04 09:41:25.391461 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-02-04 09:41:25.391470 | orchestrator | Tuesday 04 February 2025 09:28:33 +0000 (0:00:00.777) 0:01:18.907 ****** 2025-02-04 09:41:25.391478 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-04 09:41:25.391487 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-04 09:41:25.391496 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.391504 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.391513 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-04 09:41:25.391522 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.391530 | orchestrator | 2025-02-04 09:41:25.391539 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-02-04 09:41:25.391548 | orchestrator | Tuesday 04 February 2025 09:28:34 +0000 (0:00:00.835) 0:01:19.743 ****** 2025-02-04 09:41:25.391557 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-02-04 09:41:25.391565 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.391580 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-02-04 09:41:25.391589 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.391599 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-02-04 09:41:25.391610 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.391620 | orchestrator | 2025-02-04 09:41:25.391629 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-02-04 09:41:25.391639 | orchestrator | Tuesday 04 February 2025 09:28:35 +0000 (0:00:01.052) 0:01:20.796 ****** 2025-02-04 09:41:25.391649 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-04 09:41:25.391659 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-04 09:41:25.391668 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-02-04 09:41:25.391678 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-04 09:41:25.391688 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-02-04 09:41:25.391748 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-02-04 09:41:25.391760 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-02-04 09:41:25.391769 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.391778 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-02-04 09:41:25.391787 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-02-04 09:41:25.391796 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.391810 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.391819 | orchestrator | 
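Without RGW multisite, rgw_instances is built from a simple index loop on each rgw host, producing the item visible in the skip output above ({'instance_name': 'rgw0', 'radosgw_address': 192.168.16.13-15, 'radosgw_frontend_port': 8081}); the multisite variants (rgw_instances_host, rgw_instances_all) only apply when multisite is enabled, so they are skipped here. A sketch under those assumptions; the base-port and instance-count variables are illustrative:

# Sketch: one rgw instance per index, ports counted up from the base port.
- name: set_fact rgw_instances without rgw multisite
  ansible.builtin.set_fact:
    rgw_instances: >-
      {{ rgw_instances | default([]) +
         [{'instance_name': 'rgw' ~ item,
           'radosgw_address': _radosgw_address,
           'radosgw_frontend_port': radosgw_frontend_port | int + item}] }}
  loop: "{{ range(0, radosgw_num_instances | default(1)) | list }}"

With one instance per host and a base port of 8081, this yields exactly the rgw0 entries logged for testbed-node-3, -4 and -5.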
2025-02-04 09:41:25.391828 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-02-04 09:41:25.391837 | orchestrator | Tuesday 04 February 2025 09:28:36 +0000 (0:00:01.047) 0:01:21.843 ****** 2025-02-04 09:41:25.391846 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.391856 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.391865 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.391874 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.391882 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.391895 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.391903 | orchestrator | 2025-02-04 09:41:25.391911 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-02-04 09:41:25.391920 | orchestrator | Tuesday 04 February 2025 09:28:37 +0000 (0:00:01.551) 0:01:23.395 ****** 2025-02-04 09:41:25.391928 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-02-04 09:41:25.391936 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-04 09:41:25.391944 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-04 09:41:25.391952 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-02-04 09:41:25.391961 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-02-04 09:41:25.391969 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-02-04 09:41:25.391977 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-02-04 09:41:25.391985 | orchestrator | 2025-02-04 09:41:25.391993 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-02-04 09:41:25.392001 | orchestrator | Tuesday 04 February 2025 09:28:38 +0000 (0:00:01.035) 0:01:24.430 ****** 2025-02-04 09:41:25.392009 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-02-04 09:41:25.392018 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-04 09:41:25.392026 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-04 09:41:25.392034 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-02-04 09:41:25.392042 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-02-04 09:41:25.392050 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-02-04 09:41:25.392058 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-02-04 09:41:25.392066 | orchestrator | 2025-02-04 09:41:25.392075 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-02-04 09:41:25.392083 | orchestrator | Tuesday 04 February 2025 09:28:41 +0000 (0:00:02.882) 0:01:27.313 ****** 2025-02-04 09:41:25.392092 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:41:25.392101 | orchestrator | 2025-02-04 09:41:25.392109 | orchestrator | TASK [ceph-handler : check for a mon container] 
******************************** 2025-02-04 09:41:25.392117 | orchestrator | Tuesday 04 February 2025 09:28:43 +0000 (0:00:01.747) 0:01:29.061 ****** 2025-02-04 09:41:25.392125 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.392133 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.392141 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.392173 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.392184 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.392192 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.392200 | orchestrator | 2025-02-04 09:41:25.392208 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-02-04 09:41:25.392216 | orchestrator | Tuesday 04 February 2025 09:28:45 +0000 (0:00:01.787) 0:01:30.848 ****** 2025-02-04 09:41:25.392224 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.392232 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.392240 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.392249 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.392257 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.392265 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.392273 | orchestrator | 2025-02-04 09:41:25.392285 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-02-04 09:41:25.392293 | orchestrator | Tuesday 04 February 2025 09:28:46 +0000 (0:00:01.126) 0:01:31.974 ****** 2025-02-04 09:41:25.392306 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.392314 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.392322 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.392330 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.392338 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.392346 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.392354 | orchestrator | 2025-02-04 09:41:25.392362 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-02-04 09:41:25.392370 | orchestrator | Tuesday 04 February 2025 09:28:47 +0000 (0:00:00.842) 0:01:32.817 ****** 2025-02-04 09:41:25.392379 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.392387 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.392395 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.392403 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.392411 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.392419 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.392427 | orchestrator | 2025-02-04 09:41:25.392481 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-02-04 09:41:25.392494 | orchestrator | Tuesday 04 February 2025 09:28:48 +0000 (0:00:01.293) 0:01:34.111 ****** 2025-02-04 09:41:25.392503 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.392511 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.392520 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.392528 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.392536 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.392545 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.392553 | orchestrator | 2025-02-04 09:41:25.392561 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-02-04 09:41:25.392570 | orchestrator | Tuesday 
04 February 2025 09:28:49 +0000 (0:00:01.095) 0:01:35.207 ****** 2025-02-04 09:41:25.392579 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.392587 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.392595 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.392603 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.392612 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.392620 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.392628 | orchestrator | 2025-02-04 09:41:25.392637 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-02-04 09:41:25.392645 | orchestrator | Tuesday 04 February 2025 09:28:50 +0000 (0:00:01.340) 0:01:36.547 ****** 2025-02-04 09:41:25.392653 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.392662 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.392670 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.392679 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.392687 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.392695 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.392704 | orchestrator | 2025-02-04 09:41:25.392712 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-02-04 09:41:25.392721 | orchestrator | Tuesday 04 February 2025 09:28:51 +0000 (0:00:00.690) 0:01:37.238 ****** 2025-02-04 09:41:25.392729 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.392737 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.392746 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.392760 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.392769 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.392778 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.392786 | orchestrator | 2025-02-04 09:41:25.392795 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-02-04 09:41:25.392803 | orchestrator | Tuesday 04 February 2025 09:28:52 +0000 (0:00:00.903) 0:01:38.141 ****** 2025-02-04 09:41:25.392811 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.392820 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.392828 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.392844 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.392853 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.392861 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.392869 | orchestrator | 2025-02-04 09:41:25.392878 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-02-04 09:41:25.392886 | orchestrator | Tuesday 04 February 2025 09:28:53 +0000 (0:00:00.691) 0:01:38.833 ****** 2025-02-04 09:41:25.392895 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.392903 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.392911 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.392920 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.392928 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.392936 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.392945 | orchestrator | 2025-02-04 09:41:25.392953 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-02-04 09:41:25.392962 | orchestrator | 
Tuesday 04 February 2025 09:28:54 +0000 (0:00:01.211) 0:01:40.045 ****** 2025-02-04 09:41:25.392970 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.392979 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.392987 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.392996 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.393004 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.393013 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.393021 | orchestrator | 2025-02-04 09:41:25.393029 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-02-04 09:41:25.393038 | orchestrator | Tuesday 04 February 2025 09:28:55 +0000 (0:00:01.286) 0:01:41.331 ****** 2025-02-04 09:41:25.393046 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.393055 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.393063 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.393072 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.393080 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.393089 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.393097 | orchestrator | 2025-02-04 09:41:25.393105 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-02-04 09:41:25.393114 | orchestrator | Tuesday 04 February 2025 09:28:56 +0000 (0:00:01.043) 0:01:42.375 ****** 2025-02-04 09:41:25.393124 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.393133 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.393142 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.393166 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.393176 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.393185 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.393194 | orchestrator | 2025-02-04 09:41:25.393204 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-02-04 09:41:25.393213 | orchestrator | Tuesday 04 February 2025 09:28:57 +0000 (0:00:00.728) 0:01:43.103 ****** 2025-02-04 09:41:25.393222 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.393231 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.393240 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.393249 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.393258 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.393267 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.393276 | orchestrator | 2025-02-04 09:41:25.393285 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-02-04 09:41:25.393295 | orchestrator | Tuesday 04 February 2025 09:28:58 +0000 (0:00:00.985) 0:01:44.089 ****** 2025-02-04 09:41:25.393304 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.393313 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.393322 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.393377 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.393388 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.393398 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.393406 | orchestrator | 2025-02-04 09:41:25.393416 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-02-04 09:41:25.393430 | orchestrator | Tuesday 04 February 2025 09:28:59 +0000 (0:00:00.744) 0:01:44.833 ****** 2025-02-04 
09:41:25.393439 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.393448 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.393457 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.393467 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.393476 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.393488 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.393496 | orchestrator | 2025-02-04 09:41:25.393505 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-02-04 09:41:25.393513 | orchestrator | Tuesday 04 February 2025 09:29:00 +0000 (0:00:01.054) 0:01:45.888 ****** 2025-02-04 09:41:25.393521 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.393529 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.393537 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.393545 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.393553 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.393561 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.393569 | orchestrator | 2025-02-04 09:41:25.393577 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-02-04 09:41:25.393586 | orchestrator | Tuesday 04 February 2025 09:29:01 +0000 (0:00:00.943) 0:01:46.831 ****** 2025-02-04 09:41:25.393594 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.393602 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.393610 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.393618 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.393626 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.393634 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.393642 | orchestrator | 2025-02-04 09:41:25.393653 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-02-04 09:41:25.393662 | orchestrator | Tuesday 04 February 2025 09:29:02 +0000 (0:00:01.074) 0:01:47.906 ****** 2025-02-04 09:41:25.393670 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.393678 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.393686 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.393694 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.393702 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.393710 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.393718 | orchestrator | 2025-02-04 09:41:25.393726 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-02-04 09:41:25.393734 | orchestrator | Tuesday 04 February 2025 09:29:03 +0000 (0:00:00.817) 0:01:48.723 ****** 2025-02-04 09:41:25.393742 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.393750 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.393758 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.393766 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.393774 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.393782 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.393790 | orchestrator | 2025-02-04 09:41:25.393798 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-02-04 09:41:25.393806 | orchestrator | Tuesday 04 February 2025 09:29:04 +0000 (0:00:01.031) 0:01:49.755 ****** 2025-02-04 09:41:25.393815 | orchestrator | skipping: [testbed-node-3] 
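
The ceph-handler block above follows one pattern per daemon: a container check that only runs on hosts in the matching inventory group (hence the mon checks return ok on testbed-node-0..2 and skip on 3..5, while the osd/mds/rgw checks do the inverse), followed by a handler_<daemon>_status fact derived from that check, which later gates the restart handlers. A sketch of the pattern under stated assumptions (docker as the container binary and ceph-osd as the container name; ceph-ansible abstracts both, so the real tasks differ in detail):

    - name: check for an osd container (sketch)
      ansible.builtin.command: docker ps -q --filter name=ceph-osd
      register: ceph_osd_container_stat
      changed_when: false
      failed_when: false
      when: inventory_hostname in groups.get('osds', [])

    - name: set_fact handler_osd_status (sketch)
      ansible.builtin.set_fact:
        handler_osd_status: "{{ ceph_osd_container_stat.stdout_lines | default([]) | length > 0 }}"
      when: inventory_hostname in groups.get('osds', [])
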
2025-02-04 09:41:25.393823 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.393831 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.393839 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.393847 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.393855 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.393863 | orchestrator | 2025-02-04 09:41:25.393871 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-02-04 09:41:25.393891 | orchestrator | Tuesday 04 February 2025 09:29:05 +0000 (0:00:00.874) 0:01:50.629 ****** 2025-02-04 09:41:25.393900 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.393913 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.393921 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.393929 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.393937 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.393945 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.393953 | orchestrator | 2025-02-04 09:41:25.393961 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-02-04 09:41:25.393969 | orchestrator | Tuesday 04 February 2025 09:29:06 +0000 (0:00:01.238) 0:01:51.868 ****** 2025-02-04 09:41:25.393977 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.393986 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.393994 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.394002 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.394009 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.394036 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.394045 | orchestrator | 2025-02-04 09:41:25.394053 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-02-04 09:41:25.394061 | orchestrator | Tuesday 04 February 2025 09:29:07 +0000 (0:00:00.867) 0:01:52.735 ****** 2025-02-04 09:41:25.394069 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.394077 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.394085 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.394093 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.394101 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.394109 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.394117 | orchestrator | 2025-02-04 09:41:25.394125 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-02-04 09:41:25.394134 | orchestrator | Tuesday 04 February 2025 09:29:08 +0000 (0:00:00.999) 0:01:53.735 ****** 2025-02-04 09:41:25.394142 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.394149 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.394175 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.394183 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.394191 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.394199 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.394207 | orchestrator | 2025-02-04 09:41:25.394215 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-02-04 09:41:25.394224 | orchestrator | Tuesday 04 February 2025 09:29:08 +0000 (0:00:00.653) 0:01:54.389 ****** 2025-02-04 09:41:25.394280 | orchestrator | skipping: 
[testbed-node-3] 2025-02-04 09:41:25.394291 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.394300 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.394308 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.394316 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.394324 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.394332 | orchestrator | 2025-02-04 09:41:25.394340 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-02-04 09:41:25.394348 | orchestrator | Tuesday 04 February 2025 09:29:09 +0000 (0:00:00.870) 0:01:55.259 ****** 2025-02-04 09:41:25.394356 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.394365 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.394373 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.394381 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.394389 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.394397 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.394405 | orchestrator | 2025-02-04 09:41:25.394413 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-02-04 09:41:25.394421 | orchestrator | Tuesday 04 February 2025 09:29:10 +0000 (0:00:00.740) 0:01:56.000 ****** 2025-02-04 09:41:25.394430 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.394438 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.394446 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.394454 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.394467 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.394475 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.394483 | orchestrator | 2025-02-04 09:41:25.394492 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-02-04 09:41:25.394500 | orchestrator | Tuesday 04 February 2025 09:29:11 +0000 (0:00:00.823) 0:01:56.824 ****** 2025-02-04 09:41:25.394508 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.394516 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.394524 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.394532 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.394540 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.394548 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.394556 | orchestrator | 2025-02-04 09:41:25.394564 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-02-04 09:41:25.394573 | orchestrator | Tuesday 04 February 2025 09:29:11 +0000 (0:00:00.675) 0:01:57.499 ****** 2025-02-04 09:41:25.394581 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.394589 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.394597 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.394605 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.394613 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.394621 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.394629 | orchestrator | 2025-02-04 09:41:25.394637 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-02-04 09:41:25.394645 | orchestrator | Tuesday 04 February 2025 
09:29:12 +0000 (0:00:00.885) 0:01:58.385 ****** 2025-02-04 09:41:25.394654 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.394662 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.394670 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.394678 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.394686 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.394694 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.394702 | orchestrator | 2025-02-04 09:41:25.394710 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-02-04 09:41:25.394718 | orchestrator | Tuesday 04 February 2025 09:29:13 +0000 (0:00:00.623) 0:01:59.008 ****** 2025-02-04 09:41:25.394726 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.394734 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.394742 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.394750 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.394758 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.394766 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.394774 | orchestrator | 2025-02-04 09:41:25.394782 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-02-04 09:41:25.394791 | orchestrator | Tuesday 04 February 2025 09:29:14 +0000 (0:00:00.817) 0:01:59.826 ****** 2025-02-04 09:41:25.394799 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-04 09:41:25.394807 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-04 09:41:25.394815 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.394823 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-04 09:41:25.394831 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-04 09:41:25.394839 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.394847 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-04 09:41:25.394855 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-04 09:41:25.394863 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.394871 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-02-04 09:41:25.394879 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-02-04 09:41:25.394888 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.394896 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-02-04 09:41:25.394908 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-02-04 09:41:25.394917 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.394931 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-02-04 09:41:25.394940 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-02-04 09:41:25.394949 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.394958 | orchestrator | 2025-02-04 09:41:25.394967 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-02-04 09:41:25.394977 | orchestrator | Tuesday 04 February 2025 09:29:15 +0000 (0:00:00.793) 0:02:00.620 ****** 2025-02-04 09:41:25.394987 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-02-04 09:41:25.394996 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-02-04 09:41:25.395005 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.395014 | orchestrator | skipping: [testbed-node-4] => (item=osd memory 
target)  2025-02-04 09:41:25.395066 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-02-04 09:41:25.395078 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.395088 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-02-04 09:41:25.395097 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-02-04 09:41:25.395105 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.395113 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-02-04 09:41:25.395121 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-02-04 09:41:25.395129 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.395137 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-02-04 09:41:25.395145 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-02-04 09:41:25.395193 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.395202 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-02-04 09:41:25.395210 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-02-04 09:41:25.395218 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.395226 | orchestrator | 2025-02-04 09:41:25.395235 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-02-04 09:41:25.395243 | orchestrator | Tuesday 04 February 2025 09:29:15 +0000 (0:00:00.837) 0:02:01.457 ****** 2025-02-04 09:41:25.395251 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.395259 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.395267 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.395275 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.395283 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.395291 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.395299 | orchestrator | 2025-02-04 09:41:25.395307 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-02-04 09:41:25.395316 | orchestrator | Tuesday 04 February 2025 09:29:16 +0000 (0:00:00.551) 0:02:02.009 ****** 2025-02-04 09:41:25.395324 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.395332 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.395340 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.395348 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.395356 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.395364 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.395372 | orchestrator | 2025-02-04 09:41:25.395380 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-02-04 09:41:25.395393 | orchestrator | Tuesday 04 February 2025 09:29:17 +0000 (0:00:00.752) 0:02:02.761 ****** 2025-02-04 09:41:25.395402 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.395410 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.395418 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.395484 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.395493 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.395524 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.395531 | orchestrator | 2025-02-04 09:41:25.395538 | orchestrator | TASK 
[ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-02-04 09:41:25.395545 | orchestrator | Tuesday 04 February 2025 09:29:17 +0000 (0:00:00.643) 0:02:03.404 ****** 2025-02-04 09:41:25.395552 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.395559 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.395566 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.395573 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.395581 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.395588 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.395595 | orchestrator | 2025-02-04 09:41:25.395602 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-02-04 09:41:25.395609 | orchestrator | Tuesday 04 February 2025 09:29:18 +0000 (0:00:00.728) 0:02:04.133 ****** 2025-02-04 09:41:25.395616 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.395623 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.395630 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.395638 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.395645 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.395652 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.395658 | orchestrator | 2025-02-04 09:41:25.395666 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-02-04 09:41:25.395673 | orchestrator | Tuesday 04 February 2025 09:29:19 +0000 (0:00:00.692) 0:02:04.825 ****** 2025-02-04 09:41:25.395680 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.395687 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.395694 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.395701 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.395708 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.395715 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.395722 | orchestrator | 2025-02-04 09:41:25.395729 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-02-04 09:41:25.395736 | orchestrator | Tuesday 04 February 2025 09:29:20 +0000 (0:00:01.072) 0:02:05.898 ****** 2025-02-04 09:41:25.395743 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-04 09:41:25.395751 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-04 09:41:25.395758 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-04 09:41:25.395765 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.395772 | orchestrator | 2025-02-04 09:41:25.395779 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-02-04 09:41:25.395786 | orchestrator | Tuesday 04 February 2025 09:29:20 +0000 (0:00:00.580) 0:02:06.479 ****** 2025-02-04 09:41:25.395793 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-04 09:41:25.395800 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-04 09:41:25.395807 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-04 09:41:25.395814 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.395821 | orchestrator | 2025-02-04 09:41:25.395828 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-02-04 09:41:25.395890 | orchestrator | 
Tuesday 04 February 2025 09:29:21 +0000 (0:00:00.429) 0:02:06.908 ****** 2025-02-04 09:41:25.395901 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-04 09:41:25.395908 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-04 09:41:25.395916 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-04 09:41:25.395923 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.395935 | orchestrator | 2025-02-04 09:41:25.395942 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-04 09:41:25.395950 | orchestrator | Tuesday 04 February 2025 09:29:21 +0000 (0:00:00.437) 0:02:07.345 ****** 2025-02-04 09:41:25.395957 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.395969 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.395977 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.395984 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.395991 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.395999 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.396010 | orchestrator | 2025-02-04 09:41:25.396018 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-02-04 09:41:25.396026 | orchestrator | Tuesday 04 February 2025 09:29:22 +0000 (0:00:00.768) 0:02:08.114 ****** 2025-02-04 09:41:25.396033 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-04 09:41:25.396041 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.396049 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-04 09:41:25.396057 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.396064 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-02-04 09:41:25.396072 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.396079 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-04 09:41:25.396086 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.396094 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-02-04 09:41:25.396101 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.396108 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-02-04 09:41:25.396116 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.396123 | orchestrator | 2025-02-04 09:41:25.396130 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-02-04 09:41:25.396138 | orchestrator | Tuesday 04 February 2025 09:29:24 +0000 (0:00:01.520) 0:02:09.635 ****** 2025-02-04 09:41:25.396145 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.396166 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.396176 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.396188 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.396199 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.396210 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.396221 | orchestrator | 2025-02-04 09:41:25.396231 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-04 09:41:25.396242 | orchestrator | Tuesday 04 February 2025 09:29:24 +0000 (0:00:00.637) 0:02:10.273 ****** 2025-02-04 09:41:25.396253 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.396264 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.396274 | orchestrator | 
skipping: [testbed-node-5] 2025-02-04 09:41:25.396286 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.396296 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.396306 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.396316 | orchestrator | 2025-02-04 09:41:25.396328 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-02-04 09:41:25.396340 | orchestrator | Tuesday 04 February 2025 09:29:25 +0000 (0:00:00.829) 0:02:11.102 ****** 2025-02-04 09:41:25.396352 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-04 09:41:25.396364 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.396376 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-04 09:41:25.396389 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.396405 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-04 09:41:25.396417 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.396428 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-02-04 09:41:25.396439 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.396450 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-02-04 09:41:25.396460 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.396472 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-02-04 09:41:25.396482 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.396489 | orchestrator | 2025-02-04 09:41:25.396497 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-02-04 09:41:25.396514 | orchestrator | Tuesday 04 February 2025 09:29:26 +0000 (0:00:00.647) 0:02:11.749 ****** 2025-02-04 09:41:25.396522 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-02-04 09:41:25.396531 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.396539 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-02-04 09:41:25.396547 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.396556 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-02-04 09:41:25.396563 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.396571 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.396579 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.396587 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.396595 | orchestrator | 2025-02-04 09:41:25.396603 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-02-04 09:41:25.396611 | orchestrator | Tuesday 04 February 2025 09:29:26 +0000 (0:00:00.708) 0:02:12.458 ****** 2025-02-04 09:41:25.396619 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-04 09:41:25.396627 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-04 09:41:25.396635 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-04 09:41:25.396643 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.396651 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-02-04 09:41:25.396725 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-02-04 
09:41:25.396736 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-02-04 09:41:25.396745 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.396753 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-02-04 09:41:25.396760 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-02-04 09:41:25.396767 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-02-04 09:41:25.396774 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.396782 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-04 09:41:25.396789 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-04 09:41:25.396796 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-04 09:41:25.396803 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.396810 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-02-04 09:41:25.396818 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-02-04 09:41:25.396825 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-02-04 09:41:25.396832 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.396839 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-02-04 09:41:25.396850 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-02-04 09:41:25.396861 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-02-04 09:41:25.396872 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.396882 | orchestrator | 2025-02-04 09:41:25.396893 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-02-04 09:41:25.396902 | orchestrator | Tuesday 04 February 2025 09:29:28 +0000 (0:00:01.562) 0:02:14.021 ****** 2025-02-04 09:41:25.396913 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.396925 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.396935 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.396946 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.396957 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.396967 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.396977 | orchestrator | 2025-02-04 09:41:25.396987 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-02-04 09:41:25.397005 | orchestrator | Tuesday 04 February 2025 09:29:29 +0000 (0:00:01.302) 0:02:15.323 ****** 2025-02-04 09:41:25.397015 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-04 09:41:25.397031 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.397042 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-02-04 09:41:25.397053 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.397063 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-02-04 09:41:25.397074 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.397084 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.397095 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.397105 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.397115 | orchestrator | 2025-02-04 09:41:25.397126 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-02-04 09:41:25.397206 | orchestrator | Tuesday 04 February 2025 09:29:31 +0000 
(0:00:01.346) 0:02:16.669 ****** 2025-02-04 09:41:25.397219 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.397231 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.397247 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.397258 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.397268 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.397280 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.397290 | orchestrator | 2025-02-04 09:41:25.397304 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-02-04 09:41:25.397315 | orchestrator | Tuesday 04 February 2025 09:29:32 +0000 (0:00:01.511) 0:02:18.180 ****** 2025-02-04 09:41:25.397325 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.397336 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.397347 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.397357 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.397368 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.397379 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.397389 | orchestrator | 2025-02-04 09:41:25.397400 | orchestrator | TASK [ceph-container-common : generate systemd ceph-mon target file] *********** 2025-02-04 09:41:25.397410 | orchestrator | Tuesday 04 February 2025 09:29:34 +0000 (0:00:01.436) 0:02:19.617 ****** 2025-02-04 09:41:25.397421 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:41:25.397432 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:41:25.397443 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:41:25.397454 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:41:25.397465 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:41:25.397476 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:41:25.397487 | orchestrator | 2025-02-04 09:41:25.397498 | orchestrator | TASK [ceph-container-common : enable ceph.target] ****************************** 2025-02-04 09:41:25.397508 | orchestrator | Tuesday 04 February 2025 09:29:36 +0000 (0:00:02.029) 0:02:21.647 ****** 2025-02-04 09:41:25.397526 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:41:25.397537 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:41:25.397549 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:41:25.397560 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:41:25.397571 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:41:25.397583 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:41:25.397595 | orchestrator | 2025-02-04 09:41:25.397606 | orchestrator | TASK [ceph-container-common : include prerequisites.yml] *********************** 2025-02-04 09:41:25.397618 | orchestrator | Tuesday 04 February 2025 09:29:38 +0000 (0:00:02.846) 0:02:24.494 ****** 2025-02-04 09:41:25.397630 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:41:25.397644 | orchestrator | 2025-02-04 09:41:25.397656 | orchestrator | TASK [ceph-container-common : stop lvmetad] ************************************ 2025-02-04 09:41:25.397667 | orchestrator | Tuesday 04 February 2025 09:29:40 +0000 (0:00:01.296) 0:02:25.790 ****** 2025-02-04 09:41:25.397786 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.397801 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.397811 | orchestrator | 
skipping: [testbed-node-5] 2025-02-04 09:41:25.397820 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.397829 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.397838 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.397850 | orchestrator | 2025-02-04 09:41:25.397865 | orchestrator | TASK [ceph-container-common : disable and mask lvmetad service] **************** 2025-02-04 09:41:25.397875 | orchestrator | Tuesday 04 February 2025 09:29:41 +0000 (0:00:00.881) 0:02:26.671 ****** 2025-02-04 09:41:25.397884 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.397900 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.397910 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.397920 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.397930 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.397942 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.397952 | orchestrator | 2025-02-04 09:41:25.397962 | orchestrator | TASK [ceph-container-common : remove ceph udev rules] ************************** 2025-02-04 09:41:25.397972 | orchestrator | Tuesday 04 February 2025 09:29:42 +0000 (0:00:00.961) 0:02:27.633 ****** 2025-02-04 09:41:25.397982 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-02-04 09:41:25.397992 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-02-04 09:41:25.398002 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-02-04 09:41:25.398012 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-02-04 09:41:25.398046 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-02-04 09:41:25.398054 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-02-04 09:41:25.398061 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-02-04 09:41:25.398067 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-02-04 09:41:25.398074 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-02-04 09:41:25.398080 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-02-04 09:41:25.398086 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-02-04 09:41:25.398092 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-02-04 09:41:25.398099 | orchestrator | 2025-02-04 09:41:25.398105 | orchestrator | TASK [ceph-container-common : ensure tmpfiles.d is present] ******************** 2025-02-04 09:41:25.398111 | orchestrator | Tuesday 04 February 2025 09:29:43 +0000 (0:00:01.892) 0:02:29.526 ****** 2025-02-04 09:41:25.398117 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:41:25.398124 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:41:25.398130 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:41:25.398136 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:41:25.398142 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:41:25.398149 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:41:25.398168 | orchestrator | 2025-02-04 09:41:25.398175 | orchestrator | TASK [ceph-container-common : restore certificates selinux context] 
************ 2025-02-04 09:41:25.398181 | orchestrator | Tuesday 04 February 2025 09:29:45 +0000 (0:00:01.379) 0:02:30.905 ****** 2025-02-04 09:41:25.398187 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.398193 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.398199 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.398206 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.398212 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.398218 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.398224 | orchestrator | 2025-02-04 09:41:25.398237 | orchestrator | TASK [ceph-container-common : include registry.yml] **************************** 2025-02-04 09:41:25.398256 | orchestrator | Tuesday 04 February 2025 09:29:46 +0000 (0:00:01.504) 0:02:32.410 ****** 2025-02-04 09:41:25.398266 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.398276 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.398286 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.398295 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.398305 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.398318 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.398327 | orchestrator | 2025-02-04 09:41:25.398390 | orchestrator | TASK [ceph-container-common : include fetch_image.yml] ************************* 2025-02-04 09:41:25.398402 | orchestrator | Tuesday 04 February 2025 09:29:47 +0000 (0:00:00.717) 0:02:33.128 ****** 2025-02-04 09:41:25.398413 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:41:25.398424 | orchestrator | 2025-02-04 09:41:25.398434 | orchestrator | TASK [ceph-container-common : pulling nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy image] *** 2025-02-04 09:41:25.398445 | orchestrator | Tuesday 04 February 2025 09:29:49 +0000 (0:00:01.785) 0:02:34.913 ****** 2025-02-04 09:41:25.398455 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.398465 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.398475 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.398485 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.398495 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.398505 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.398516 | orchestrator | 2025-02-04 09:41:25.398526 | orchestrator | TASK [ceph-container-common : pulling alertmanager/prometheus/grafana container images] *** 2025-02-04 09:41:25.398537 | orchestrator | Tuesday 04 February 2025 09:30:12 +0000 (0:00:22.664) 0:02:57.578 ****** 2025-02-04 09:41:25.398547 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-02-04 09:41:25.398558 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-02-04 09:41:25.398665 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-02-04 09:41:25.398677 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.398684 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-02-04 09:41:25.398692 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-02-04 09:41:25.398699 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-02-04 09:41:25.398706 | 
orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.398714 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-02-04 09:41:25.398722 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-02-04 09:41:25.398729 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-02-04 09:41:25.398736 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.398743 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-02-04 09:41:25.398750 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-02-04 09:41:25.398757 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-02-04 09:41:25.398764 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.398771 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-02-04 09:41:25.398777 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-02-04 09:41:25.398783 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-02-04 09:41:25.398789 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.398796 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-02-04 09:41:25.398802 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-02-04 09:41:25.398817 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-02-04 09:41:25.398823 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.398830 | orchestrator | 2025-02-04 09:41:25.398870 | orchestrator | TASK [ceph-container-common : pulling node-exporter container image] *********** 2025-02-04 09:41:25.398877 | orchestrator | Tuesday 04 February 2025 09:30:12 +0000 (0:00:00.894) 0:02:58.472 ****** 2025-02-04 09:41:25.398883 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.398889 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.398896 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.398902 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.398908 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.398915 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.398921 | orchestrator | 2025-02-04 09:41:25.398927 | orchestrator | TASK [ceph-container-common : export local ceph dev image] ********************* 2025-02-04 09:41:25.398934 | orchestrator | Tuesday 04 February 2025 09:30:13 +0000 (0:00:00.772) 0:02:59.244 ****** 2025-02-04 09:41:25.398940 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.398946 | orchestrator | 2025-02-04 09:41:25.398953 | orchestrator | TASK [ceph-container-common : copy ceph dev image file] ************************ 2025-02-04 09:41:25.398959 | orchestrator | Tuesday 04 February 2025 09:30:13 +0000 (0:00:00.127) 0:02:59.372 ****** 2025-02-04 09:41:25.398965 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.398971 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.398978 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.398984 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.398990 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.398997 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.399003 | orchestrator | 2025-02-04 
09:41:25.399009 | orchestrator | TASK [ceph-container-common : load ceph dev image] ***************************** 2025-02-04 09:41:25.399021 | orchestrator | Tuesday 04 February 2025 09:30:14 +0000 (0:00:00.897) 0:03:00.269 ****** 2025-02-04 09:41:25.399027 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.399034 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.399040 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.399046 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.399053 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.399059 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.399066 | orchestrator | 2025-02-04 09:41:25.399072 | orchestrator | TASK [ceph-container-common : remove tmp ceph dev image file] ****************** 2025-02-04 09:41:25.399078 | orchestrator | Tuesday 04 February 2025 09:30:15 +0000 (0:00:00.675) 0:03:00.944 ****** 2025-02-04 09:41:25.399085 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.399091 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.399131 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.399137 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.399144 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.399162 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.399169 | orchestrator | 2025-02-04 09:41:25.399176 | orchestrator | TASK [ceph-container-common : get ceph version] ******************************** 2025-02-04 09:41:25.399182 | orchestrator | Tuesday 04 February 2025 09:30:16 +0000 (0:00:00.917) 0:03:01.862 ****** 2025-02-04 09:41:25.399188 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.399195 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.399201 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.399207 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.399214 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.399220 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.399226 | orchestrator | 2025-02-04 09:41:25.399232 | orchestrator | TASK [ceph-container-common : set_fact ceph_version ceph_version.stdout.split] *** 2025-02-04 09:41:25.399239 | orchestrator | Tuesday 04 February 2025 09:30:19 +0000 (0:00:03.692) 0:03:05.555 ****** 2025-02-04 09:41:25.399245 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.399256 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.399263 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.399269 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.399276 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.399282 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.399288 | orchestrator | 2025-02-04 09:41:25.399295 | orchestrator | TASK [ceph-container-common : include release.yml] ***************************** 2025-02-04 09:41:25.399352 | orchestrator | Tuesday 04 February 2025 09:30:20 +0000 (0:00:00.871) 0:03:06.426 ****** 2025-02-04 09:41:25.399362 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:41:25.399371 | orchestrator | 2025-02-04 09:41:25.399377 | orchestrator | TASK [ceph-container-common : set_fact ceph_release jewel] ********************* 2025-02-04 09:41:25.399384 | orchestrator | Tuesday 04 February 2025 09:30:22 +0000 (0:00:01.314) 0:03:07.741 ****** 2025-02-04 09:41:25.399391 | orchestrator | 
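The "get ceph version" task (about 3.7 s across all six nodes) runs `ceph --version` inside the freshly pulled image, and the following set_fact keeps only the numeric version from stdout. A sketch of that pair, assuming Docker and the image coordinates from the pull step; the sample output string in the comment is illustrative:

    - hosts: all
      gather_facts: false
      tasks:
        - name: get ceph version from inside the pulled image
          ansible.builtin.command: >-
            docker run --rm --entrypoint /usr/bin/ceph
            nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy --version
          register: ceph_version_out
          changed_when: false

        - name: keep field 2 of stdout such as "ceph version 17.2.x (...) quincy (stable)"
          ansible.builtin.set_fact:
            ceph_version: "{{ ceph_version_out.stdout.split(' ')[2] }}"   # -> "17.2.x"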
skipping: [testbed-node-3] 2025-02-04 09:41:25.399401 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.399408 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.399415 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.399421 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.399429 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.399435 | orchestrator | 2025-02-04 09:41:25.399442 | orchestrator | TASK [ceph-container-common : set_fact ceph_release kraken] ******************** 2025-02-04 09:41:25.399449 | orchestrator | Tuesday 04 February 2025 09:30:23 +0000 (0:00:00.862) 0:03:08.603 ****** 2025-02-04 09:41:25.399456 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.399462 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.399469 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.399476 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.399482 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.399489 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.399496 | orchestrator | 2025-02-04 09:41:25.399502 | orchestrator | TASK [ceph-container-common : set_fact ceph_release luminous] ****************** 2025-02-04 09:41:25.399509 | orchestrator | Tuesday 04 February 2025 09:30:24 +0000 (0:00:01.056) 0:03:09.659 ****** 2025-02-04 09:41:25.399516 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.399522 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.399529 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.399536 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.399542 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.399549 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.399556 | orchestrator | 2025-02-04 09:41:25.399562 | orchestrator | TASK [ceph-container-common : set_fact ceph_release mimic] ********************* 2025-02-04 09:41:25.399569 | orchestrator | Tuesday 04 February 2025 09:30:24 +0000 (0:00:00.770) 0:03:10.430 ****** 2025-02-04 09:41:25.399576 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.399582 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.399589 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.399596 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.399602 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.399609 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.399615 | orchestrator | 2025-02-04 09:41:25.399622 | orchestrator | TASK [ceph-container-common : set_fact ceph_release nautilus] ****************** 2025-02-04 09:41:25.399629 | orchestrator | Tuesday 04 February 2025 09:30:26 +0000 (0:00:01.453) 0:03:11.884 ****** 2025-02-04 09:41:25.399635 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.399642 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.399649 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.399655 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.399662 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.399669 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.399675 | orchestrator | 2025-02-04 09:41:25.399682 | orchestrator | TASK [ceph-container-common : set_fact ceph_release octopus] ******************* 2025-02-04 09:41:25.399693 | orchestrator | Tuesday 04 February 2025 09:30:27 +0000 (0:00:01.170) 0:03:13.054 ****** 2025-02-04 09:41:25.399700 | 
orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.399706 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.399713 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.399720 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.399726 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.399733 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.399740 | orchestrator | 2025-02-04 09:41:25.399746 | orchestrator | TASK [ceph-container-common : set_fact ceph_release pacific] ******************* 2025-02-04 09:41:25.399753 | orchestrator | Tuesday 04 February 2025 09:30:28 +0000 (0:00:01.367) 0:03:14.421 ****** 2025-02-04 09:41:25.399760 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.399766 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.399773 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.399780 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.399786 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.399793 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.399800 | orchestrator | 2025-02-04 09:41:25.399806 | orchestrator | TASK [ceph-container-common : set_fact ceph_release quincy] ******************** 2025-02-04 09:41:25.399813 | orchestrator | Tuesday 04 February 2025 09:30:29 +0000 (0:00:00.960) 0:03:15.382 ****** 2025-02-04 09:41:25.399820 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.399826 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.399833 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.399840 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.399846 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.399853 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.399859 | orchestrator | 2025-02-04 09:41:25.399866 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-02-04 09:41:25.399873 | orchestrator | Tuesday 04 February 2025 09:30:31 +0000 (0:00:02.066) 0:03:17.449 ****** 2025-02-04 09:41:25.399880 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:41:25.399887 | orchestrator | 2025-02-04 09:41:25.399894 | orchestrator | TASK [ceph-config : create ceph initial directories] *************************** 2025-02-04 09:41:25.399901 | orchestrator | Tuesday 04 February 2025 09:30:33 +0000 (0:00:01.961) 0:03:19.410 ****** 2025-02-04 09:41:25.399907 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-02-04 09:41:25.399914 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-02-04 09:41:25.399921 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-02-04 09:41:25.399927 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-02-04 09:41:25.399969 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-02-04 09:41:25.399979 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-02-04 09:41:25.399986 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-02-04 09:41:25.399993 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-02-04 09:41:25.399999 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-02-04 09:41:25.400006 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-02-04 09:41:25.400013 | orchestrator | 
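release.yml then walks the known release names with one guarded set_fact per release, which is why every task from jewel through pacific is skipped above and only the quincy check fires for a 17.x version. A sketch of the guard pattern for the one branch that matches, using the ceph_version fact derived above; one such task exists per release name:

    - name: set_fact ceph_release quincy
      ansible.builtin.set_fact:
        ceph_release: quincy
      when: ceph_version.split('.')[0] is version('17', '==')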
changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-02-04 09:41:25.400019 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-02-04 09:41:25.400026 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-02-04 09:41:25.400033 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-02-04 09:41:25.400039 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-02-04 09:41:25.400046 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-02-04 09:41:25.400053 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-02-04 09:41:25.400063 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-02-04 09:41:25.400070 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-02-04 09:41:25.400076 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-02-04 09:41:25.400083 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-02-04 09:41:25.400089 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-02-04 09:41:25.400096 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-02-04 09:41:25.400103 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-02-04 09:41:25.400109 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-02-04 09:41:25.400116 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-02-04 09:41:25.400122 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-02-04 09:41:25.400129 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-02-04 09:41:25.400136 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-02-04 09:41:25.400142 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-02-04 09:41:25.400162 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-02-04 09:41:25.400170 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-02-04 09:41:25.400177 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-02-04 09:41:25.400183 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-02-04 09:41:25.400190 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-02-04 09:41:25.400196 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-02-04 09:41:25.400203 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-02-04 09:41:25.400210 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-02-04 09:41:25.400216 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-02-04 09:41:25.400223 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-02-04 09:41:25.400229 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-02-04 09:41:25.400236 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-02-04 09:41:25.400242 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-02-04 09:41:25.400249 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-02-04 09:41:25.400256 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-02-04 09:41:25.400262 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-02-04 
09:41:25.400269 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-02-04 09:41:25.400276 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-02-04 09:41:25.400282 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-02-04 09:41:25.400289 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-02-04 09:41:25.400295 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-02-04 09:41:25.400302 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-02-04 09:41:25.400309 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-02-04 09:41:25.400315 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-02-04 09:41:25.400322 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-02-04 09:41:25.400328 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-02-04 09:41:25.400335 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-02-04 09:41:25.400342 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-02-04 09:41:25.400348 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-02-04 09:41:25.400372 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-02-04 09:41:25.400379 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-02-04 09:41:25.400385 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-02-04 09:41:25.400392 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-02-04 09:41:25.400398 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-02-04 09:41:25.400440 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-02-04 09:41:25.400450 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-02-04 09:41:25.400456 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-02-04 09:41:25.400463 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-02-04 09:41:25.400470 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-02-04 09:41:25.400477 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-02-04 09:41:25.400484 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-02-04 09:41:25.400494 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-02-04 09:41:25.400501 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-02-04 09:41:25.400508 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-02-04 09:41:25.400518 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-02-04 09:41:25.400526 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-02-04 09:41:25.400532 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-02-04 09:41:25.400539 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-02-04 09:41:25.400546 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-02-04 09:41:25.400552 | orchestrator | changed: 
[testbed-node-4] => (item=/var/log/ceph) 2025-02-04 09:41:25.400559 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-02-04 09:41:25.400566 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-02-04 09:41:25.400573 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-02-04 09:41:25.400579 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-02-04 09:41:25.400586 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-02-04 09:41:25.400593 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-02-04 09:41:25.400599 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-02-04 09:41:25.400606 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-02-04 09:41:25.400613 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-02-04 09:41:25.400620 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-02-04 09:41:25.400627 | orchestrator | 2025-02-04 09:41:25.400633 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-02-04 09:41:25.400640 | orchestrator | Tuesday 04 February 2025 09:30:41 +0000 (0:00:07.825) 0:03:27.236 ****** 2025-02-04 09:41:25.400647 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.400653 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.400660 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.400667 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:41:25.400674 | orchestrator | 2025-02-04 09:41:25.400681 | orchestrator | TASK [ceph-config : create rados gateway instance directories] ***************** 2025-02-04 09:41:25.400688 | orchestrator | Tuesday 04 February 2025 09:30:43 +0000 (0:00:01.645) 0:03:28.882 ****** 2025-02-04 09:41:25.400694 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-02-04 09:41:25.400706 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-02-04 09:41:25.400713 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-02-04 09:41:25.400720 | orchestrator | 2025-02-04 09:41:25.400726 | orchestrator | TASK [ceph-config : generate environment file] ********************************* 2025-02-04 09:41:25.400733 | orchestrator | Tuesday 04 February 2025 09:30:44 +0000 (0:00:01.365) 0:03:30.247 ****** 2025-02-04 09:41:25.400740 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-02-04 09:41:25.400746 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-02-04 09:41:25.400753 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-02-04 09:41:25.400760 | orchestrator | 2025-02-04 09:41:25.400766 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-02-04 09:41:25.400773 | orchestrator | Tuesday 04 
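rgw_systemd_environment_file.yml runs only on the rgw hosts (testbed-node-3 through testbed-node-5) and creates, per instance dict, a radosgw data directory plus the environment file later referenced by the rgw systemd unit. A sketch for testbed-node-3 under the instance values shown in the log; the directory layout, file name, and INST_NAME variable are assumptions, not taken from the role, and the per-host dicts differ only in the address:

    - hosts: testbed-node-3
      gather_facts: true
      vars:
        rgw_instances:
          - { instance_name: 'rgw0', radosgw_address: '192.168.16.13', radosgw_frontend_port: 8081 }
      tasks:
        - name: create rados gateway instance directories
          ansible.builtin.file:
            path: "/var/lib/ceph/radosgw/ceph-rgw.{{ ansible_facts['hostname'] }}.{{ item.instance_name }}"
            state: directory
            mode: "0755"
          loop: "{{ rgw_instances }}"

        - name: generate environment file consumed by the rgw unit
          ansible.builtin.copy:
            dest: "/var/lib/ceph/radosgw/ceph-rgw.{{ ansible_facts['hostname'] }}.{{ item.instance_name }}/EnvironmentFile"
            content: |
              INST_NAME={{ item.instance_name }}
            mode: "0644"
          loop: "{{ rgw_instances }}"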
February 2025 09:30:46 +0000 (0:00:01.768) 0:03:32.015 ****** 2025-02-04 09:41:25.400780 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.400787 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.400797 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.400805 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.400811 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.400818 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.400824 | orchestrator | 2025-02-04 09:41:25.400831 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-02-04 09:41:25.400838 | orchestrator | Tuesday 04 February 2025 09:30:47 +0000 (0:00:01.059) 0:03:33.075 ****** 2025-02-04 09:41:25.400844 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.400851 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.400858 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.400864 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.400871 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.400909 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.400918 | orchestrator | 2025-02-04 09:41:25.400925 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-02-04 09:41:25.400932 | orchestrator | Tuesday 04 February 2025 09:30:48 +0000 (0:00:00.779) 0:03:33.855 ****** 2025-02-04 09:41:25.400938 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.400945 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.400952 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.400958 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.400965 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.400971 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.400978 | orchestrator | 2025-02-04 09:41:25.400984 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-02-04 09:41:25.400991 | orchestrator | Tuesday 04 February 2025 09:30:49 +0000 (0:00:01.126) 0:03:34.981 ****** 2025-02-04 09:41:25.400998 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.401004 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.401011 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.401017 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.401024 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.401030 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.401041 | orchestrator | 2025-02-04 09:41:25.401048 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-02-04 09:41:25.401055 | orchestrator | Tuesday 04 February 2025 09:30:50 +0000 (0:00:01.026) 0:03:36.008 ****** 2025-02-04 09:41:25.401061 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.401068 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.401078 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.401085 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.401091 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.401101 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.401107 | orchestrator | 2025-02-04 09:41:25.401114 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-02-04 09:41:25.401121 | orchestrator | Tuesday 04 February 
2025 09:30:51 +0000 (0:00:01.369) 0:03:37.377 ****** 2025-02-04 09:41:25.401128 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.401134 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.401141 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.401147 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.401191 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.401198 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.401204 | orchestrator | 2025-02-04 09:41:25.401210 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-02-04 09:41:25.401217 | orchestrator | Tuesday 04 February 2025 09:30:52 +0000 (0:00:00.912) 0:03:38.290 ****** 2025-02-04 09:41:25.401223 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.401230 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.401236 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.401242 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.401261 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.401267 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.401273 | orchestrator | 2025-02-04 09:41:25.401279 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-02-04 09:41:25.401285 | orchestrator | Tuesday 04 February 2025 09:30:53 +0000 (0:00:00.920) 0:03:39.211 ****** 2025-02-04 09:41:25.401292 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.401297 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.401303 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.401309 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.401315 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.401322 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.401328 | orchestrator | 2025-02-04 09:41:25.401334 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-02-04 09:41:25.401340 | orchestrator | Tuesday 04 February 2025 09:30:54 +0000 (0:00:00.840) 0:03:40.051 ****** 2025-02-04 09:41:25.401346 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.401352 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.401358 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.401364 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.401373 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.401379 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.401385 | orchestrator | 2025-02-04 09:41:25.401391 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-02-04 09:41:25.401397 | orchestrator | Tuesday 04 February 2025 09:30:56 +0000 (0:00:01.725) 0:03:41.776 ****** 2025-02-04 09:41:25.401403 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.401409 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.401415 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.401421 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.401426 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.401432 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.401438 | orchestrator | 2025-02-04 09:41:25.401444 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-02-04 
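With no batch device list to report on, both 'ceph-volume lvm batch --report' parsing paths are skipped and the OSD count instead comes from 'ceph-volume lvm list', whose JSON output is keyed by OSD id, so its length is the number of existing OSDs. A sketch of that counting step, assuming Docker and illustrative bind mounts for the containerized ceph-volume:

    - hosts: testbed-node-3,testbed-node-4,testbed-node-5
      gather_facts: false
      tasks:
        - name: run 'ceph-volume lvm list' inside the ceph container
          ansible.builtin.command: >-
            docker run --rm --privileged
            -v /dev:/dev -v /run/lvm:/run/lvm
            -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph
            --entrypoint ceph-volume
            nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy
            lvm list --format json
          register: lvm_list
          changed_when: false

        - name: add existing OSDs to num_osds
          ansible.builtin.set_fact:
            num_osds: "{{ (num_osds | default(0) | int) + (lvm_list.stdout | from_json | length) }}"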
09:41:25.401450 | orchestrator | Tuesday 04 February 2025 09:30:57 +0000 (0:00:00.817) 0:03:42.593 ****** 2025-02-04 09:41:25.401456 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-04 09:41:25.401462 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-04 09:41:25.401468 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.401478 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-04 09:41:25.401484 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-04 09:41:25.401490 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.401496 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-04 09:41:25.401502 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-04 09:41:25.401508 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.401514 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-02-04 09:41:25.401520 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-02-04 09:41:25.401526 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.401532 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-02-04 09:41:25.401538 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-02-04 09:41:25.401544 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.401588 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-02-04 09:41:25.401597 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-02-04 09:41:25.401605 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.401612 | orchestrator | 2025-02-04 09:41:25.401619 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-02-04 09:41:25.401626 | orchestrator | Tuesday 04 February 2025 09:30:57 +0000 (0:00:00.958) 0:03:43.551 ****** 2025-02-04 09:41:25.401637 | orchestrator | ok: [testbed-node-3] => (item=osd memory target) 2025-02-04 09:41:25.401644 | orchestrator | ok: [testbed-node-3] => (item=osd_memory_target) 2025-02-04 09:41:25.401651 | orchestrator | ok: [testbed-node-4] => (item=osd memory target) 2025-02-04 09:41:25.401658 | orchestrator | ok: [testbed-node-4] => (item=osd_memory_target) 2025-02-04 09:41:25.401665 | orchestrator | ok: [testbed-node-5] => (item=osd memory target) 2025-02-04 09:41:25.401672 | orchestrator | ok: [testbed-node-5] => (item=osd_memory_target) 2025-02-04 09:41:25.401679 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-02-04 09:41:25.401686 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-02-04 09:41:25.401693 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.401700 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-02-04 09:41:25.401707 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-02-04 09:41:25.401714 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.401721 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-02-04 09:41:25.401728 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-02-04 09:41:25.401735 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.401742 | orchestrator | 2025-02-04 09:41:25.401748 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-02-04 09:41:25.401754 | orchestrator | Tuesday 04 February 2025 09:30:58 +0000 (0:00:00.864) 0:03:44.416 ****** 2025-02-04 09:41:25.401761 | orchestrator | ok: [testbed-node-3] 2025-02-04 
09:41:25.401767 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.401774 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.401780 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.401786 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.401793 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.401799 | orchestrator | 2025-02-04 09:41:25.401808 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-02-04 09:41:25.401814 | orchestrator | Tuesday 04 February 2025 09:30:59 +0000 (0:00:00.889) 0:03:45.306 ****** 2025-02-04 09:41:25.401821 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.401827 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.401833 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.401840 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.401846 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.401852 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.401859 | orchestrator | 2025-02-04 09:41:25.401865 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-02-04 09:41:25.401875 | orchestrator | Tuesday 04 February 2025 09:31:00 +0000 (0:00:00.868) 0:03:46.174 ****** 2025-02-04 09:41:25.401882 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.401888 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.401894 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.401901 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.401907 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.401913 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.401920 | orchestrator | 2025-02-04 09:41:25.401926 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-02-04 09:41:25.401933 | orchestrator | Tuesday 04 February 2025 09:31:01 +0000 (0:00:00.985) 0:03:47.159 ****** 2025-02-04 09:41:25.401939 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.401945 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.401951 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.401958 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.401964 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.401970 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.401976 | orchestrator | 2025-02-04 09:41:25.401983 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-02-04 09:41:25.401989 | orchestrator | Tuesday 04 February 2025 09:31:02 +0000 (0:00:00.806) 0:03:47.965 ****** 2025-02-04 09:41:25.401995 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.402002 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.402008 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.402030 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.402038 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.402044 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.402051 | orchestrator | 2025-02-04 09:41:25.402057 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-02-04 09:41:25.402063 | orchestrator | Tuesday 04 February 2025 09:31:03 +0000 (0:00:01.307) 0:03:49.272 ****** 2025-02-04 09:41:25.402070 | 
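The memory-target bookkeeping above exists so an operator value always wins: the role first looks for an override (skipped here, no override present), strips both spellings of the key out of the override's osd section, and only then sets _osd_memory_target itself. A sketch of the strip step; _osd_section is an illustrative fact name and the accumulation-across-loop trick is one way to remove several keys, not necessarily the role's:

    - name: drop osd_memory_target from the conf override, both key spellings
      ansible.builtin.set_fact:
        _osd_section: >-
          {{ (_osd_section | default(ceph_conf_overrides.get('osd', {})))
             | dict2items | rejectattr('key', 'equalto', item) | list | items2dict }}
      loop:
        - osd memory target
        - osd_memory_target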
orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.402076 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.402082 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.402088 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.402097 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.402104 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.402110 | orchestrator | 2025-02-04 09:41:25.402117 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-02-04 09:41:25.402123 | orchestrator | Tuesday 04 February 2025 09:31:04 +0000 (0:00:01.279) 0:03:50.552 ****** 2025-02-04 09:41:25.402129 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-04 09:41:25.402135 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-04 09:41:25.402142 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-04 09:41:25.402148 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.402166 | orchestrator | 2025-02-04 09:41:25.402173 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-02-04 09:41:25.402179 | orchestrator | Tuesday 04 February 2025 09:31:05 +0000 (0:00:00.544) 0:03:51.096 ****** 2025-02-04 09:41:25.402221 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-04 09:41:25.402230 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-04 09:41:25.402236 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-04 09:41:25.402243 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.402249 | orchestrator | 2025-02-04 09:41:25.402255 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-02-04 09:41:25.402262 | orchestrator | Tuesday 04 February 2025 09:31:05 +0000 (0:00:00.460) 0:03:51.557 ****** 2025-02-04 09:41:25.402268 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-04 09:41:25.402275 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-04 09:41:25.402286 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-04 09:41:25.402292 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.402299 | orchestrator | 2025-02-04 09:41:25.402305 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-04 09:41:25.402311 | orchestrator | Tuesday 04 February 2025 09:31:06 +0000 (0:00:00.833) 0:03:52.390 ****** 2025-02-04 09:41:25.402318 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.402324 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.402330 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.402337 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.402343 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.402349 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.402355 | orchestrator | 2025-02-04 09:41:25.402362 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-02-04 09:41:25.402368 | orchestrator | Tuesday 04 February 2025 09:31:07 +0000 (0:00:01.145) 0:03:53.536 ****** 2025-02-04 09:41:25.402374 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-02-04 09:41:25.402381 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-02-04 09:41:25.402387 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-02-04 
09:41:25.402393 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-02-04 09:41:25.402399 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.402406 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-02-04 09:41:25.402412 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.402418 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-02-04 09:41:25.402425 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.402431 | orchestrator | 2025-02-04 09:41:25.402437 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-02-04 09:41:25.402443 | orchestrator | Tuesday 04 February 2025 09:31:09 +0000 (0:00:01.164) 0:03:54.701 ****** 2025-02-04 09:41:25.402450 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.402456 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.402462 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.402468 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.402475 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.402481 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.402487 | orchestrator | 2025-02-04 09:41:25.402493 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-04 09:41:25.402500 | orchestrator | Tuesday 04 February 2025 09:31:10 +0000 (0:00:01.187) 0:03:55.888 ****** 2025-02-04 09:41:25.402506 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.402512 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.402519 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.402525 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.402531 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.402537 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.402544 | orchestrator | 2025-02-04 09:41:25.402550 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-02-04 09:41:25.402557 | orchestrator | Tuesday 04 February 2025 09:31:11 +0000 (0:00:00.848) 0:03:56.736 ****** 2025-02-04 09:41:25.402563 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-04 09:41:25.402569 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.402576 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-04 09:41:25.402582 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.402588 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-04 09:41:25.402595 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-02-04 09:41:25.402601 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-02-04 09:41:25.402607 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.402614 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.402620 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.402626 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-02-04 09:41:25.402639 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.402645 | orchestrator | 2025-02-04 09:41:25.402651 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-02-04 09:41:25.402658 | orchestrator | Tuesday 04 February 2025 09:31:13 +0000 (0:00:01.837) 0:03:58.573 ****** 2025-02-04 09:41:25.402664 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  
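Because rgw_instances was just reset, the non-multisite branch rebuilds it per host: one dict per instance index carrying the name rgw<n>, the host's _radosgw_address, and the frontend port offset by the index, which yields exactly the rgw0/8081 dicts seen earlier. A sketch of that fact construction; radosgw_num_instances and the port default are assumptions:

    - name: set_fact rgw_instances without rgw multisite
      ansible.builtin.set_fact:
        rgw_instances: >-
          {{ (rgw_instances | default([])) +
             [{'instance_name': 'rgw' ~ item,
               'radosgw_address': _radosgw_address,
               'radosgw_frontend_port': (radosgw_frontend_port | default(8081) | int) + (item | int)}] }}
      loop: "{{ range(0, radosgw_num_instances | default(1)) | list }}"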
2025-02-04 09:41:25.402679 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.402686 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-02-04 09:41:25.402692 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.402698 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-02-04 09:41:25.402705 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.402711 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.402717 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.402724 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.402730 | orchestrator | 2025-02-04 09:41:25.402736 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-02-04 09:41:25.402743 | orchestrator | Tuesday 04 February 2025 09:31:14 +0000 (0:00:01.084) 0:03:59.658 ****** 2025-02-04 09:41:25.402749 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-04 09:41:25.402790 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-04 09:41:25.402798 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-04 09:41:25.402804 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.402810 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-02-04 09:41:25.402816 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-02-04 09:41:25.402822 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-02-04 09:41:25.402828 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-04 09:41:25.402834 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-04 09:41:25.402840 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-04 09:41:25.402846 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-02-04 09:41:25.402852 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-02-04 09:41:25.402858 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.402864 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-02-04 09:41:25.402870 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-02-04 09:41:25.402875 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-02-04 09:41:25.402881 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-02-04 09:41:25.402887 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.402894 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.402900 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.402906 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-02-04 09:41:25.402915 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-02-04 09:41:25.402921 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-02-04 09:41:25.402928 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.402934 | orchestrator | 2025-02-04 09:41:25.402940 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-02-04 09:41:25.402947 | orchestrator | Tuesday 04 February 2025 09:31:16 +0000 (0:00:02.294) 0:04:01.953 ****** 2025-02-04 09:41:25.402953 | 
orchestrator | changed: [testbed-node-3] 2025-02-04 09:41:25.402959 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:41:25.402966 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:41:25.402972 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:41:25.402986 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:41:25.402992 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:41:25.402999 | orchestrator | 2025-02-04 09:41:25.403005 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-02-04 09:41:25.403014 | orchestrator | Tuesday 04 February 2025 09:31:22 +0000 (0:00:06.461) 0:04:08.414 ****** 2025-02-04 09:41:25.403020 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:41:25.403027 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:41:25.403033 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:41:25.403039 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:41:25.403046 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:41:25.403052 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:41:25.403058 | orchestrator | 2025-02-04 09:41:25.403064 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-02-04 09:41:25.403071 | orchestrator | Tuesday 04 February 2025 09:31:23 +0000 (0:00:01.024) 0:04:09.438 ****** 2025-02-04 09:41:25.403077 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.403083 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.403090 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.403096 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:41:25.403103 | orchestrator | 2025-02-04 09:41:25.403109 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-02-04 09:41:25.403116 | orchestrator | Tuesday 04 February 2025 09:31:25 +0000 (0:00:01.131) 0:04:10.570 ****** 2025-02-04 09:41:25.403122 | orchestrator | 2025-02-04 09:41:25.403128 | orchestrator | TASK [ceph-handler : osds handler] ********************************************* 2025-02-04 09:41:25.403135 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.403141 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.403147 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.403167 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:41:25.403173 | orchestrator | 2025-02-04 09:41:25.403179 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact trigger_restart] ********************** 2025-02-04 09:41:25.403185 | orchestrator | Tuesday 04 February 2025 09:31:26 +0000 (0:00:01.313) 0:04:11.884 ****** 2025-02-04 09:41:25.403191 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-04 09:41:25.403197 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-04 09:41:25.403203 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-04 09:41:25.403209 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.403215 | orchestrator | 2025-02-04 09:41:25.403221 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called before restart] ******** 2025-02-04 09:41:25.403227 | orchestrator | Tuesday 04 February 2025 09:31:26 +0000 (0:00:00.535) 0:04:12.420 ****** 2025-02-04 09:41:25.403233 | orchestrator | skipping: 
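Everything gathered so far feeds one template task: "generate ceph.conf configuration file" renders /etc/ceph/ceph.conf on all six nodes and, via notify, queues the handler blocks that follow (the scripts tempdir, the per-daemon restart scripts, and the *_handler_called guards). A sketch of the task-plus-notify wiring; the template file name is illustrative, while the handler names match the RUNNING HANDLER lines in this log:

    - name: generate ceph.conf configuration file
      ansible.builtin.template:
        src: ceph.conf.j2          # illustrative template name
        dest: /etc/ceph/ceph.conf
        owner: root
        group: root
        mode: "0644"
      notify:
        - mons handler
        - osds handler
        - mdss handler
        - rgws handler
        - mgrs handler

Routing every restart through notify is what lets a single config change fan out to exactly the daemon families present on each node and nothing more.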
[testbed-node-3] 2025-02-04 09:41:25.403238 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.403244 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.403250 | orchestrator | 2025-02-04 09:41:25.403256 | orchestrator | TASK [ceph-handler : set _osd_handler_called before restart] ******************* 2025-02-04 09:41:25.403262 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-04 09:41:25.403268 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-04 09:41:25.403274 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-04 09:41:25.403280 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.403286 | orchestrator | 2025-02-04 09:41:25.403292 | orchestrator | RUNNING HANDLER [ceph-handler : unset noup flag] ******************************* 2025-02-04 09:41:25.403298 | orchestrator | Tuesday 04 February 2025 09:31:28 +0000 (0:00:01.304) 0:04:13.724 ****** 2025-02-04 09:41:25.403338 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.403347 | orchestrator | 2025-02-04 09:41:25.403353 | orchestrator | RUNNING HANDLER [ceph-handler : copy osd restart script] *********************** 2025-02-04 09:41:25.403363 | orchestrator | Tuesday 04 February 2025 09:31:28 +0000 (0:00:00.275) 0:04:13.999 ****** 2025-02-04 09:41:25.403369 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.403375 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.403381 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.403387 | orchestrator | 2025-02-04 09:41:25.403392 | orchestrator | RUNNING HANDLER [ceph-handler : copy osd restart script] *********************** 2025-02-04 09:41:25.403398 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.403404 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.403410 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.403416 | orchestrator | 2025-02-04 09:41:25.403422 | orchestrator | RUNNING HANDLER [ceph-handler : get pool list] ********************************* 2025-02-04 09:41:25.403428 | orchestrator | Tuesday 04 February 2025 09:31:29 +0000 (0:00:01.122) 0:04:15.122 ****** 2025-02-04 09:41:25.403434 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.403440 | orchestrator | 2025-02-04 09:41:25.403446 | orchestrator | RUNNING HANDLER [ceph-handler : get balancer module status] ******************** 2025-02-04 09:41:25.403452 | orchestrator | Tuesday 04 February 2025 09:31:29 +0000 (0:00:00.265) 0:04:15.388 ****** 2025-02-04 09:41:25.403458 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.403464 | orchestrator | 2025-02-04 09:41:25.403470 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-02-04 09:41:25.403476 | orchestrator | Tuesday 04 February 2025 09:31:30 +0000 (0:00:00.294) 0:04:15.682 ****** 2025-02-04 09:41:25.403482 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.403488 | orchestrator | 2025-02-04 09:41:25.403494 | orchestrator | RUNNING HANDLER [ceph-handler : disable balancer] ****************************** 2025-02-04 09:41:25.403500 | orchestrator | Tuesday 04 February 2025 09:31:30 +0000 (0:00:00.138) 0:04:15.821 ****** 2025-02-04 09:41:25.403505 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.403511 | orchestrator | 2025-02-04 09:41:25.403517 | orchestrator | RUNNING HANDLER [ceph-handler : disable pg autoscale on pools] ***************** 2025-02-04 09:41:25.403523 | 
orchestrator | Tuesday 04 February 2025 09:31:30 +0000 (0:00:00.333) 0:04:16.155 ****** 2025-02-04 09:41:25.403529 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.403535 | orchestrator | 2025-02-04 09:41:25.403541 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph osds daemon(s)] ******************* 2025-02-04 09:41:25.403547 | orchestrator | Tuesday 04 February 2025 09:31:30 +0000 (0:00:00.248) 0:04:16.403 ****** 2025-02-04 09:41:25.403553 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-04 09:41:25.403559 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-04 09:41:25.403564 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-04 09:41:25.403570 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.403576 | orchestrator | 2025-02-04 09:41:25.403582 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called after restart] ********* 2025-02-04 09:41:25.403588 | orchestrator | Tuesday 04 February 2025 09:31:31 +0000 (0:00:00.487) 0:04:16.890 ****** 2025-02-04 09:41:25.403594 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.403600 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.403606 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.403612 | orchestrator | 2025-02-04 09:41:25.403618 | orchestrator | TASK [ceph-handler : set _osd_handler_called after restart] ******************** 2025-02-04 09:41:25.403623 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.403629 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.403635 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.403641 | orchestrator | 2025-02-04 09:41:25.403647 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable pg autoscale on pools] *************** 2025-02-04 09:41:25.403656 | orchestrator | Tuesday 04 February 2025 09:31:32 +0000 (0:00:01.188) 0:04:18.079 ****** 2025-02-04 09:41:25.403662 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.403668 | orchestrator | 2025-02-04 09:41:25.403674 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable balancer] **************************** 2025-02-04 09:41:25.403684 | orchestrator | Tuesday 04 February 2025 09:31:32 +0000 (0:00:00.327) 0:04:18.406 ****** 2025-02-04 09:41:25.403690 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.403696 | orchestrator | 2025-02-04 09:41:25.403702 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-02-04 09:41:25.403708 | orchestrator | Tuesday 04 February 2025 09:31:33 +0000 (0:00:00.320) 0:04:18.727 ****** 2025-02-04 09:41:25.403713 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:41:25.403719 | orchestrator | 2025-02-04 09:41:25.403725 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called before restart] ******** 2025-02-04 09:41:25.403731 | orchestrator | Tuesday 04 February 2025 09:31:34 +0000 (0:00:01.327) 0:04:20.054 ****** 2025-02-04 09:41:25.403745 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.403751 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.403757 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.403763 | orchestrator | 2025-02-04 09:41:25.403768 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-02-04 09:41:25.403774 | orchestrator | Tuesday 04 
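The osds handler is a no-op on this first deploy (every step skips, since no OSD daemons exist yet to restart), but its task names spell out the rolling-restart safety pattern: record each pool's pg_autoscale_mode, switch the balancer and autoscaler off, restart OSDs through the copied script one node at a time, then re-enable both. A sketch of the bracketing steps, assuming the ceph CLI is reachable on the target hosts; 'ceph-osd' is a hypothetical inventory group and the script path is a placeholder:

    - name: disable balancer before OSD restarts
      ansible.builtin.command: ceph balancer off
      run_once: true
      delegate_to: "{{ groups['ceph-osd'][0] }}"

    - name: restart ceph osds daemon(s), one node at a time
      ansible.builtin.command: /tmp/restart_osd_daemon.sh   # hypothetical script staged by the handler
      delegate_to: "{{ item }}"
      loop: "{{ groups['ceph-osd'] }}"
      run_once: true

    - name: re-enable balancer once all OSDs are back
      ansible.builtin.command: ceph balancer on
      run_once: true
      delegate_to: "{{ groups['ceph-osd'][0] }}"

Keeping the balancer off during the restart window avoids pointless data movement while OSDs bounce in and out of the cluster.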
February 2025 09:31:36 +0000 (0:00:01.685) 0:04:21.740 ****** 2025-02-04 09:41:25.403780 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.403786 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.403792 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.403798 | orchestrator | 2025-02-04 09:41:25.403804 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-02-04 09:41:25.403810 | orchestrator | Tuesday 04 February 2025 09:31:36 +0000 (0:00:00.681) 0:04:22.421 ****** 2025-02-04 09:41:25.403816 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.403822 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.403828 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.403834 | orchestrator | 2025-02-04 09:41:25.403840 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-02-04 09:41:25.403846 | orchestrator | Tuesday 04 February 2025 09:31:37 +0000 (0:00:00.890) 0:04:23.311 ****** 2025-02-04 09:41:25.403852 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.403858 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.403899 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.403912 | orchestrator | 2025-02-04 09:41:25.403918 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-02-04 09:41:25.403925 | orchestrator | Tuesday 04 February 2025 09:31:38 +0000 (0:00:00.767) 0:04:24.079 ****** 2025-02-04 09:41:25.403931 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:41:25.403937 | orchestrator | 2025-02-04 09:41:25.403943 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-02-04 09:41:25.403949 | orchestrator | Tuesday 04 February 2025 09:31:39 +0000 (0:00:01.280) 0:04:25.359 ****** 2025-02-04 09:41:25.403955 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.403962 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.403968 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.403974 | orchestrator | 2025-02-04 09:41:25.403980 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-02-04 09:41:25.403986 | orchestrator | Tuesday 04 February 2025 09:31:40 +0000 (0:00:00.945) 0:04:26.304 ****** 2025-02-04 09:41:25.403992 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:41:25.403998 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:41:25.404003 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:41:25.404009 | orchestrator | 2025-02-04 09:41:25.404015 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-02-04 09:41:25.404021 | orchestrator | Tuesday 04 February 2025 09:31:42 +0000 (0:00:01.766) 0:04:28.070 ****** 2025-02-04 09:41:25.404027 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-04 09:41:25.404033 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-04 09:41:25.404043 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-04 09:41:25.404049 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.404055 | orchestrator | 2025-02-04 09:41:25.404061 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-02-04 09:41:25.404067 | orchestrator | 
Tuesday 04 February 2025 09:31:43 +0000 (0:00:01.241) 0:04:29.312 ****** 2025-02-04 09:41:25.404073 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.404079 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.404085 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.404091 | orchestrator | 2025-02-04 09:41:25.404097 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-02-04 09:41:25.404103 | orchestrator | Tuesday 04 February 2025 09:31:44 +0000 (0:00:00.996) 0:04:30.309 ****** 2025-02-04 09:41:25.404109 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.404115 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.404121 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.404127 | orchestrator | 2025-02-04 09:41:25.404133 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 2025-02-04 09:41:25.404139 | orchestrator | Tuesday 04 February 2025 09:31:45 +0000 (0:00:00.501) 0:04:30.811 ****** 2025-02-04 09:41:25.404145 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:41:25.404182 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:41:25.404189 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:41:25.404195 | orchestrator | 2025-02-04 09:41:25.404201 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mds daemon(s)] ******************** 2025-02-04 09:41:25.404207 | orchestrator | Tuesday 04 February 2025 09:31:46 +0000 (0:00:01.527) 0:04:32.338 ****** 2025-02-04 09:41:25.404213 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-04 09:41:25.404219 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-04 09:41:25.404225 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-04 09:41:25.404231 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.404237 | orchestrator | 2025-02-04 09:41:25.404243 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-02-04 09:41:25.404249 | orchestrator | Tuesday 04 February 2025 09:31:47 +0000 (0:00:01.170) 0:04:33.509 ****** 2025-02-04 09:41:25.404255 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.404261 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.404267 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.404273 | orchestrator | 2025-02-04 09:41:25.404279 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-02-04 09:41:25.404289 | orchestrator | Tuesday 04 February 2025 09:31:48 +0000 (0:00:00.464) 0:04:33.974 ****** 2025-02-04 09:41:25.404295 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:41:25.404301 | orchestrator | 2025-02-04 09:41:25.404307 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ******** 2025-02-04 09:41:25.404313 | orchestrator | Tuesday 04 February 2025 09:31:49 +0000 (0:00:00.700) 0:04:34.675 ****** 2025-02-04 09:41:25.404318 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.404324 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.404329 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.404335 | orchestrator | 2025-02-04 09:41:25.404340 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-02-04 09:41:25.404345 | orchestrator | Tuesday 04 February 2025 
09:31:49 +0000 (0:00:00.584) 0:04:35.259 ****** 2025-02-04 09:41:25.404351 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:41:25.404356 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:41:25.404362 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:41:25.404367 | orchestrator | 2025-02-04 09:41:25.404372 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-02-04 09:41:25.404378 | orchestrator | Tuesday 04 February 2025 09:31:51 +0000 (0:00:01.642) 0:04:36.902 ****** 2025-02-04 09:41:25.404383 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-04 09:41:25.404392 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-04 09:41:25.404397 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-04 09:41:25.404403 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.404408 | orchestrator | 2025-02-04 09:41:25.404414 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-02-04 09:41:25.404419 | orchestrator | Tuesday 04 February 2025 09:31:52 +0000 (0:00:00.844) 0:04:37.746 ****** 2025-02-04 09:41:25.404457 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.404465 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.404471 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.404476 | orchestrator | 2025-02-04 09:41:25.404482 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-02-04 09:41:25.404487 | orchestrator | Tuesday 04 February 2025 09:31:52 +0000 (0:00:00.426) 0:04:38.173 ****** 2025-02-04 09:41:25.404493 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.404498 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.404503 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.404509 | orchestrator | 2025-02-04 09:41:25.404514 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-02-04 09:41:25.404520 | orchestrator | Tuesday 04 February 2025 09:31:53 +0000 (0:00:00.517) 0:04:38.690 ****** 2025-02-04 09:41:25.404525 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.404531 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.404536 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.404541 | orchestrator | 2025-02-04 09:41:25.404547 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ****** 2025-02-04 09:41:25.404552 | orchestrator | Tuesday 04 February 2025 09:31:53 +0000 (0:00:00.400) 0:04:39.091 ****** 2025-02-04 09:41:25.404558 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.404563 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.404569 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.404574 | orchestrator | 2025-02-04 09:41:25.404579 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-02-04 09:41:25.404585 | orchestrator | Tuesday 04 February 2025 09:31:53 +0000 (0:00:00.365) 0:04:39.457 ****** 2025-02-04 09:41:25.404590 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:41:25.404596 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:41:25.404601 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:41:25.404606 | orchestrator | 2025-02-04 09:41:25.404612 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 
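[Annotation] The handler phase above follows ceph-ansible's usual restart pattern: a per-daemon restart script is copied into a tempdir on the affected nodes, the restart is driven serially from the first host of each group (the other hosts appear as loop items on that host), and the tempdir is removed afterwards. In this run the actual restarts were all skipped because the daemons had not changed. A minimal sketch of the serialized idea, assuming systemd-managed containerized daemons, SSH access, and the mgr units on the control nodes; the real scripts also verify each daemon rejoins before moving on:

    # Restart one mgr at a time so the cluster never loses more than
    # one manager; node names are the testbed hosts from this log.
    for node in testbed-node-0 testbed-node-1 testbed-node-2; do
      ssh "$node" 'sudo systemctl restart "ceph-mgr@$(hostname -s)"'
    done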
2025-02-04 09:41:25.404617 | orchestrator | 2025-02-04 09:41:25.404623 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-02-04 09:41:25.404628 | orchestrator | Tuesday 04 February 2025 09:31:56 +0000 (0:00:02.812) 0:04:42.270 ****** 2025-02-04 09:41:25.404633 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:41:25.404639 | orchestrator | 2025-02-04 09:41:25.404644 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-02-04 09:41:25.404650 | orchestrator | Tuesday 04 February 2025 09:31:57 +0000 (0:00:00.724) 0:04:42.994 ****** 2025-02-04 09:41:25.404655 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.404664 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.404670 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.404675 | orchestrator | 2025-02-04 09:41:25.404681 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-02-04 09:41:25.404687 | orchestrator | Tuesday 04 February 2025 09:31:58 +0000 (0:00:00.824) 0:04:43.818 ****** 2025-02-04 09:41:25.404692 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.404698 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.404703 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.404708 | orchestrator | 2025-02-04 09:41:25.404714 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-02-04 09:41:25.404723 | orchestrator | Tuesday 04 February 2025 09:31:58 +0000 (0:00:00.542) 0:04:44.361 ****** 2025-02-04 09:41:25.404728 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.404734 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.404742 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.404747 | orchestrator | 2025-02-04 09:41:25.404753 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-02-04 09:41:25.404758 | orchestrator | Tuesday 04 February 2025 09:31:59 +0000 (0:00:00.375) 0:04:44.736 ****** 2025-02-04 09:41:25.404763 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.404769 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.404774 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.404779 | orchestrator | 2025-02-04 09:41:25.404785 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-02-04 09:41:25.404790 | orchestrator | Tuesday 04 February 2025 09:31:59 +0000 (0:00:00.395) 0:04:45.131 ****** 2025-02-04 09:41:25.404796 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.404810 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.404816 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.404822 | orchestrator | 2025-02-04 09:41:25.404827 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-02-04 09:41:25.404838 | orchestrator | Tuesday 04 February 2025 09:32:00 +0000 (0:00:00.816) 0:04:45.948 ****** 2025-02-04 09:41:25.404844 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.404850 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.404855 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.404861 | orchestrator | 2025-02-04 09:41:25.404866 | orchestrator | TASK [ceph-handler : check for a nfs container] 
******************************** 2025-02-04 09:41:25.404872 | orchestrator | Tuesday 04 February 2025 09:32:01 +0000 (0:00:00.798) 0:04:46.747 ****** 2025-02-04 09:41:25.404878 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.404883 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.404889 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.404895 | orchestrator | 2025-02-04 09:41:25.404901 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-02-04 09:41:25.404906 | orchestrator | Tuesday 04 February 2025 09:32:01 +0000 (0:00:00.455) 0:04:47.202 ****** 2025-02-04 09:41:25.404912 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.404918 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.404924 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.404929 | orchestrator | 2025-02-04 09:41:25.404935 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-02-04 09:41:25.404941 | orchestrator | Tuesday 04 February 2025 09:32:02 +0000 (0:00:00.483) 0:04:47.686 ****** 2025-02-04 09:41:25.404946 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.404952 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.404958 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.404964 | orchestrator | 2025-02-04 09:41:25.405002 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-02-04 09:41:25.405010 | orchestrator | Tuesday 04 February 2025 09:32:02 +0000 (0:00:00.424) 0:04:48.110 ****** 2025-02-04 09:41:25.405016 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.405021 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.405026 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.405032 | orchestrator | 2025-02-04 09:41:25.405037 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-02-04 09:41:25.405043 | orchestrator | Tuesday 04 February 2025 09:32:03 +0000 (0:00:00.606) 0:04:48.716 ****** 2025-02-04 09:41:25.405048 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.405053 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.405059 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.405064 | orchestrator | 2025-02-04 09:41:25.405070 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-02-04 09:41:25.405075 | orchestrator | Tuesday 04 February 2025 09:32:04 +0000 (0:00:00.952) 0:04:49.668 ****** 2025-02-04 09:41:25.405084 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.405090 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.405095 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.405101 | orchestrator | 2025-02-04 09:41:25.405106 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-02-04 09:41:25.405112 | orchestrator | Tuesday 04 February 2025 09:32:04 +0000 (0:00:00.429) 0:04:50.098 ****** 2025-02-04 09:41:25.405117 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.405122 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.405128 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.405133 | orchestrator | 2025-02-04 09:41:25.405139 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-02-04 09:41:25.405144 | orchestrator | Tuesday 04 February 
2025 09:32:05 +0000 (0:00:00.487) 0:04:50.586 ****** 2025-02-04 09:41:25.405149 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.405164 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.405170 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.405175 | orchestrator | 2025-02-04 09:41:25.405181 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-02-04 09:41:25.405186 | orchestrator | Tuesday 04 February 2025 09:32:05 +0000 (0:00:00.664) 0:04:51.250 ****** 2025-02-04 09:41:25.405191 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.405197 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.405202 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.405208 | orchestrator | 2025-02-04 09:41:25.405213 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-02-04 09:41:25.405218 | orchestrator | Tuesday 04 February 2025 09:32:06 +0000 (0:00:00.394) 0:04:51.645 ****** 2025-02-04 09:41:25.405224 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.405229 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.405234 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.405240 | orchestrator | 2025-02-04 09:41:25.405245 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-02-04 09:41:25.405251 | orchestrator | Tuesday 04 February 2025 09:32:06 +0000 (0:00:00.370) 0:04:52.016 ****** 2025-02-04 09:41:25.405256 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.405261 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.405267 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.405272 | orchestrator | 2025-02-04 09:41:25.405277 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-02-04 09:41:25.405283 | orchestrator | Tuesday 04 February 2025 09:32:06 +0000 (0:00:00.332) 0:04:52.348 ****** 2025-02-04 09:41:25.405288 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.405294 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.405299 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.405304 | orchestrator | 2025-02-04 09:41:25.405310 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-02-04 09:41:25.405315 | orchestrator | Tuesday 04 February 2025 09:32:07 +0000 (0:00:00.572) 0:04:52.921 ****** 2025-02-04 09:41:25.405320 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.405326 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.405331 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.405337 | orchestrator | 2025-02-04 09:41:25.405342 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-02-04 09:41:25.405347 | orchestrator | Tuesday 04 February 2025 09:32:07 +0000 (0:00:00.504) 0:04:53.426 ****** 2025-02-04 09:41:25.405353 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.405358 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.405366 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.405372 | orchestrator | 2025-02-04 09:41:25.405377 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-02-04 09:41:25.405383 | orchestrator | Tuesday 04 February 2025 09:32:08 +0000 (0:00:00.421) 0:04:53.847 ****** 2025-02-04 09:41:25.405388 | orchestrator | 
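[Annotation] The check_running_containers.yml tasks above feed the handler_*_status facts: each "check for a ... container" task probes whether a matching container is running on the node, and the corresponding set_fact records the result for the restart handlers. A sketch of that kind of probe, assuming podman as the container binary (docker works the same way here) and ceph-ansible's name pattern:

    # Non-empty output (and exit 0) only while a matching mon container
    # is running; the same filter pattern applies to mgr, crash, etc.
    podman ps -q --filter "name=ceph-mon-$(hostname -s)"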
skipping: [testbed-node-0] 2025-02-04 09:41:25.405397 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.405402 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.405408 | orchestrator | 2025-02-04 09:41:25.405413 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-02-04 09:41:25.405421 | orchestrator | Tuesday 04 February 2025 09:32:08 +0000 (0:00:00.360) 0:04:54.208 ****** 2025-02-04 09:41:25.405426 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.405432 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.405437 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.405443 | orchestrator | 2025-02-04 09:41:25.405448 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-02-04 09:41:25.405453 | orchestrator | Tuesday 04 February 2025 09:32:09 +0000 (0:00:00.579) 0:04:54.788 ****** 2025-02-04 09:41:25.405459 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.405464 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.405469 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.405475 | orchestrator | 2025-02-04 09:41:25.405480 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-02-04 09:41:25.405486 | orchestrator | Tuesday 04 February 2025 09:32:09 +0000 (0:00:00.355) 0:04:55.144 ****** 2025-02-04 09:41:25.405491 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.405497 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.405502 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.405507 | orchestrator | 2025-02-04 09:41:25.405513 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-02-04 09:41:25.405549 | orchestrator | Tuesday 04 February 2025 09:32:09 +0000 (0:00:00.331) 0:04:55.476 ****** 2025-02-04 09:41:25.405556 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.405562 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.405567 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.405573 | orchestrator | 2025-02-04 09:41:25.405578 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-02-04 09:41:25.405583 | orchestrator | Tuesday 04 February 2025 09:32:10 +0000 (0:00:00.437) 0:04:55.913 ****** 2025-02-04 09:41:25.405589 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.405594 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.405600 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.405605 | orchestrator | 2025-02-04 09:41:25.405610 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-02-04 09:41:25.405616 | orchestrator | Tuesday 04 February 2025 09:32:10 +0000 (0:00:00.566) 0:04:56.479 ****** 2025-02-04 09:41:25.405621 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.405626 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.405632 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.405637 | orchestrator | 2025-02-04 09:41:25.405642 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-02-04 09:41:25.405648 | orchestrator | Tuesday 04 February 2025 09:32:11 +0000 (0:00:00.386) 0:04:56.866 ****** 2025-02-04 09:41:25.405654 | orchestrator | skipping: [testbed-node-0] 2025-02-04 
09:41:25.405659 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.405664 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.405670 | orchestrator | 2025-02-04 09:41:25.405675 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-02-04 09:41:25.405680 | orchestrator | Tuesday 04 February 2025 09:32:11 +0000 (0:00:00.540) 0:04:57.407 ****** 2025-02-04 09:41:25.405686 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.405691 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.405697 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.405702 | orchestrator | 2025-02-04 09:41:25.405708 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-02-04 09:41:25.405713 | orchestrator | Tuesday 04 February 2025 09:32:12 +0000 (0:00:00.473) 0:04:57.881 ****** 2025-02-04 09:41:25.405722 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.405728 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.405733 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.405739 | orchestrator | 2025-02-04 09:41:25.405744 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-02-04 09:41:25.405749 | orchestrator | Tuesday 04 February 2025 09:32:12 +0000 (0:00:00.526) 0:04:58.407 ****** 2025-02-04 09:41:25.405755 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.405760 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.405766 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.405771 | orchestrator | 2025-02-04 09:41:25.405776 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-02-04 09:41:25.405782 | orchestrator | Tuesday 04 February 2025 09:32:13 +0000 (0:00:00.408) 0:04:58.816 ****** 2025-02-04 09:41:25.405787 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.405792 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.405798 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.405803 | orchestrator | 2025-02-04 09:41:25.405809 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-02-04 09:41:25.405814 | orchestrator | Tuesday 04 February 2025 09:32:13 +0000 (0:00:00.414) 0:04:59.230 ****** 2025-02-04 09:41:25.405820 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-02-04 09:41:25.405825 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-02-04 09:41:25.405830 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-02-04 09:41:25.405836 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-02-04 09:41:25.405841 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.405847 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.405852 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-02-04 09:41:25.405857 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-02-04 09:41:25.405863 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.405868 | orchestrator | 2025-02-04 09:41:25.405874 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-02-04 09:41:25.405879 | orchestrator | Tuesday 04 February 2025 09:32:14 +0000 (0:00:00.454) 0:04:59.684 ****** 2025-02-04 09:41:25.405884 | orchestrator | skipping: 
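[Annotation] The skipped ceph-config tasks above would size num_osds by asking ceph-volume for a dry-run report and then adding any OSDs that already exist; the two set_fact variants exist because the legacy report is an object with an "osds" list while the newer report is a plain JSON list. A sketch of both queries, assuming jq is available and using illustrative device paths:

    # Dry run: how many OSDs would 'lvm batch' create on these devices?
    ceph-volume lvm batch --report --format json /dev/sdb /dev/sdc \
      | jq 'if type == "array" then length else .osds | length end'

    # How many OSDs have already been created on this node?
    ceph-volume lvm list --format json | jq 'length'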
[testbed-node-0] => (item=osd memory target)  2025-02-04 09:41:25.405890 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-02-04 09:41:25.405895 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.405901 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-02-04 09:41:25.405906 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-02-04 09:41:25.405911 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.405917 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-02-04 09:41:25.405923 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-02-04 09:41:25.405928 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.405933 | orchestrator | 2025-02-04 09:41:25.405939 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-02-04 09:41:25.405944 | orchestrator | Tuesday 04 February 2025 09:32:14 +0000 (0:00:00.723) 0:05:00.408 ****** 2025-02-04 09:41:25.405949 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.405955 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.405960 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.405965 | orchestrator | 2025-02-04 09:41:25.405971 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-02-04 09:41:25.405985 | orchestrator | Tuesday 04 February 2025 09:32:15 +0000 (0:00:00.603) 0:05:01.012 ****** 2025-02-04 09:41:25.405991 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.405996 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.406002 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.406008 | orchestrator | 2025-02-04 09:41:25.406061 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-02-04 09:41:25.406075 | orchestrator | Tuesday 04 February 2025 09:32:15 +0000 (0:00:00.489) 0:05:01.502 ****** 2025-02-04 09:41:25.406080 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.406086 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.406095 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.406101 | orchestrator | 2025-02-04 09:41:25.406107 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-02-04 09:41:25.406113 | orchestrator | Tuesday 04 February 2025 09:32:16 +0000 (0:00:00.421) 0:05:01.923 ****** 2025-02-04 09:41:25.406119 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.406125 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.406131 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.406136 | orchestrator | 2025-02-04 09:41:25.406142 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-02-04 09:41:25.406148 | orchestrator | Tuesday 04 February 2025 09:32:17 +0000 (0:00:00.919) 0:05:02.843 ****** 2025-02-04 09:41:25.406164 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.406170 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.406175 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.406181 | orchestrator | 2025-02-04 09:41:25.406186 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-02-04 09:41:25.406192 | orchestrator | Tuesday 04 February 2025 
09:32:17 +0000 (0:00:00.407) 0:05:03.251 ****** 2025-02-04 09:41:25.406197 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.406203 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.406208 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.406213 | orchestrator | 2025-02-04 09:41:25.406222 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-02-04 09:41:25.406227 | orchestrator | Tuesday 04 February 2025 09:32:18 +0000 (0:00:00.547) 0:05:03.799 ****** 2025-02-04 09:41:25.406233 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-04 09:41:25.406238 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-04 09:41:25.406243 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-04 09:41:25.406248 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.406254 | orchestrator | 2025-02-04 09:41:25.406259 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-02-04 09:41:25.406265 | orchestrator | Tuesday 04 February 2025 09:32:19 +0000 (0:00:00.907) 0:05:04.706 ****** 2025-02-04 09:41:25.406270 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-04 09:41:25.406275 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-04 09:41:25.406281 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-04 09:41:25.406286 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.406292 | orchestrator | 2025-02-04 09:41:25.406297 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-02-04 09:41:25.406302 | orchestrator | Tuesday 04 February 2025 09:32:20 +0000 (0:00:01.330) 0:05:06.036 ****** 2025-02-04 09:41:25.406308 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-04 09:41:25.406313 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-04 09:41:25.406318 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-04 09:41:25.406324 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.406329 | orchestrator | 2025-02-04 09:41:25.406335 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-04 09:41:25.406340 | orchestrator | Tuesday 04 February 2025 09:32:21 +0000 (0:00:00.941) 0:05:06.977 ****** 2025-02-04 09:41:25.406345 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.406351 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.406356 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.406362 | orchestrator | 2025-02-04 09:41:25.406367 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-02-04 09:41:25.406378 | orchestrator | Tuesday 04 February 2025 09:32:22 +0000 (0:00:00.629) 0:05:07.607 ****** 2025-02-04 09:41:25.406384 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-02-04 09:41:25.406389 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.406395 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-02-04 09:41:25.406400 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.406406 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-02-04 09:41:25.406411 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.406416 | orchestrator | 2025-02-04 09:41:25.406422 | orchestrator | TASK 
[ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-02-04 09:41:25.406427 | orchestrator | Tuesday 04 February 2025 09:32:22 +0000 (0:00:00.852) 0:05:08.459 ****** 2025-02-04 09:41:25.406432 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.406438 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.406443 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.406449 | orchestrator | 2025-02-04 09:41:25.406454 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-04 09:41:25.406459 | orchestrator | Tuesday 04 February 2025 09:32:23 +0000 (0:00:00.374) 0:05:08.834 ****** 2025-02-04 09:41:25.406465 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.406470 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.406476 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.406482 | orchestrator | 2025-02-04 09:41:25.406487 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-02-04 09:41:25.406492 | orchestrator | Tuesday 04 February 2025 09:32:24 +0000 (0:00:00.754) 0:05:09.588 ****** 2025-02-04 09:41:25.406498 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-02-04 09:41:25.406503 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.406509 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-02-04 09:41:25.406514 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.406520 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-02-04 09:41:25.406525 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.406531 | orchestrator | 2025-02-04 09:41:25.406536 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-02-04 09:41:25.406575 | orchestrator | Tuesday 04 February 2025 09:32:24 +0000 (0:00:00.683) 0:05:10.271 ****** 2025-02-04 09:41:25.406583 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.406588 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.406594 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.406600 | orchestrator | 2025-02-04 09:41:25.406605 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-02-04 09:41:25.406610 | orchestrator | Tuesday 04 February 2025 09:32:25 +0000 (0:00:00.379) 0:05:10.651 ****** 2025-02-04 09:41:25.406616 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-02-04 09:41:25.406624 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-02-04 09:41:25.406629 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-02-04 09:41:25.406635 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.406640 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-02-04 09:41:25.406646 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-02-04 09:41:25.406651 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-02-04 09:41:25.406656 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.406662 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-02-04 09:41:25.406667 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-02-04 09:41:25.406673 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-02-04 09:41:25.406678 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.406683 
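[Annotation] The ceph-facts block above resolves _radosgw_address from one of three inputs, tried in order: an address block (CIDR), an explicit radosgw_address, or a radosgw_interface, and then expands the rgw_instances facts from it; all of it is skipped here because these hosts carry no rgw role in this play. A shell sketch of the interface-based IPv4 case only, assuming iproute2, where 'ens3' is a stand-in for whatever radosgw_interface would name on a real node:

    # First global IPv4 address on the RGW interface, CIDR suffix stripped.
    ip -4 -o addr show dev ens3 | awk '{print $4}' | cut -d/ -f1 | head -n1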
| orchestrator | 2025-02-04 09:41:25.406689 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-02-04 09:41:25.406694 | orchestrator | Tuesday 04 February 2025 09:32:26 +0000 (0:00:01.221) 0:05:11.872 ****** 2025-02-04 09:41:25.406704 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.406709 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.406715 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.406720 | orchestrator | 2025-02-04 09:41:25.406726 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-02-04 09:41:25.406731 | orchestrator | Tuesday 04 February 2025 09:32:26 +0000 (0:00:00.651) 0:05:12.524 ****** 2025-02-04 09:41:25.406736 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.406742 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.406747 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.406753 | orchestrator | 2025-02-04 09:41:25.406758 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-02-04 09:41:25.406763 | orchestrator | Tuesday 04 February 2025 09:32:27 +0000 (0:00:00.991) 0:05:13.515 ****** 2025-02-04 09:41:25.406769 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.406774 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.406780 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.406785 | orchestrator | 2025-02-04 09:41:25.406791 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-02-04 09:41:25.406796 | orchestrator | Tuesday 04 February 2025 09:32:28 +0000 (0:00:00.685) 0:05:14.200 ****** 2025-02-04 09:41:25.406801 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.406807 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.406812 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.406818 | orchestrator | 2025-02-04 09:41:25.406823 | orchestrator | TASK [ceph-mon : set_fact container_exec_cmd] ********************************** 2025-02-04 09:41:25.406828 | orchestrator | Tuesday 04 February 2025 09:32:29 +0000 (0:00:01.108) 0:05:15.309 ****** 2025-02-04 09:41:25.406834 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.406839 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.406845 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.406850 | orchestrator | 2025-02-04 09:41:25.406856 | orchestrator | TASK [ceph-mon : include deploy_monitors.yml] ********************************** 2025-02-04 09:41:25.406861 | orchestrator | Tuesday 04 February 2025 09:32:30 +0000 (0:00:00.554) 0:05:15.864 ****** 2025-02-04 09:41:25.406867 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:41:25.406872 | orchestrator | 2025-02-04 09:41:25.406880 | orchestrator | TASK [ceph-mon : check if monitor initial keyring already exists] ************** 2025-02-04 09:41:25.406886 | orchestrator | Tuesday 04 February 2025 09:32:31 +0000 (0:00:01.118) 0:05:16.982 ****** 2025-02-04 09:41:25.406891 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.406897 | orchestrator | 2025-02-04 09:41:25.406902 | orchestrator | TASK [ceph-mon : generate monitor initial keyring] ***************************** 2025-02-04 09:41:25.406908 | orchestrator | Tuesday 04 February 2025 09:32:31 +0000 (0:00:00.244) 0:05:17.227 ****** 2025-02-04 
09:41:25.406913 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-02-04 09:41:25.406918 | orchestrator | 2025-02-04 09:41:25.406924 | orchestrator | TASK [ceph-mon : set_fact _initial_mon_key_success] **************************** 2025-02-04 09:41:25.406929 | orchestrator | Tuesday 04 February 2025 09:32:32 +0000 (0:00:01.036) 0:05:18.263 ****** 2025-02-04 09:41:25.406935 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.406940 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.406949 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.406954 | orchestrator | 2025-02-04 09:41:25.406960 | orchestrator | TASK [ceph-mon : get initial keyring when it already exists] ******************* 2025-02-04 09:41:25.406966 | orchestrator | Tuesday 04 February 2025 09:32:33 +0000 (0:00:00.573) 0:05:18.837 ****** 2025-02-04 09:41:25.406971 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.406977 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.406982 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.406987 | orchestrator | 2025-02-04 09:41:25.407000 | orchestrator | TASK [ceph-mon : create monitor initial keyring] ******************************* 2025-02-04 09:41:25.407011 | orchestrator | Tuesday 04 February 2025 09:32:33 +0000 (0:00:00.441) 0:05:19.278 ****** 2025-02-04 09:41:25.407016 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:41:25.407022 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:41:25.407027 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:41:25.407032 | orchestrator | 2025-02-04 09:41:25.407038 | orchestrator | TASK [ceph-mon : copy the initial key in /etc/ceph (for containers)] *********** 2025-02-04 09:41:25.407043 | orchestrator | Tuesday 04 February 2025 09:32:35 +0000 (0:00:01.899) 0:05:21.178 ****** 2025-02-04 09:41:25.407049 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:41:25.407069 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:41:25.407076 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:41:25.407081 | orchestrator | 2025-02-04 09:41:25.407087 | orchestrator | TASK [ceph-mon : create monitor directory] ************************************* 2025-02-04 09:41:25.407092 | orchestrator | Tuesday 04 February 2025 09:32:36 +0000 (0:00:01.049) 0:05:22.228 ****** 2025-02-04 09:41:25.407098 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:41:25.407103 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:41:25.407109 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:41:25.407114 | orchestrator | 2025-02-04 09:41:25.407119 | orchestrator | TASK [ceph-mon : recursively fix ownership of monitor directory] *************** 2025-02-04 09:41:25.407125 | orchestrator | Tuesday 04 February 2025 09:32:37 +0000 (0:00:00.943) 0:05:23.172 ****** 2025-02-04 09:41:25.407130 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.407136 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.407141 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.407146 | orchestrator | 2025-02-04 09:41:25.407183 | orchestrator | TASK [ceph-mon : create custom admin keyring] ********************************** 2025-02-04 09:41:25.407189 | orchestrator | Tuesday 04 February 2025 09:32:38 +0000 (0:00:00.854) 0:05:24.027 ****** 2025-02-04 09:41:25.407195 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.407200 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.407205 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.407211 | orchestrator | 
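[Annotation] deploy_monitors.yml above generated the monitor initial keyring once (delegated to localhost) and then wrote it out on each mon node; copying it into /etc/ceph makes it visible inside the containers. A sketch of the underlying ceph-authtool calls, with paths assumed from a default containerized layout; note the admin-key import task was skipped in this run:

    # One-time generation of the initial mon keyring, with the
    # capabilities a monitor needs.
    ceph-authtool --create-keyring /etc/ceph/ceph.mon.keyring \
        --gen-key -n mon. --cap mon 'allow *'
    # Optionally merge the admin key into the mon keyring so the
    # first monitor can bootstrap it.
    ceph-authtool /etc/ceph/ceph.mon.keyring \
        --import-keyring /etc/ceph/ceph.client.admin.keyring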
2025-02-04 09:41:25.407216 | orchestrator | TASK [ceph-mon : set_fact ceph-authtool container command] ********************* 2025-02-04 09:41:25.407223 | orchestrator | Tuesday 04 February 2025 09:32:39 +0000 (0:00:00.714) 0:05:24.742 ****** 2025-02-04 09:41:25.407229 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.407235 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.407241 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.407247 | orchestrator | 2025-02-04 09:41:25.407253 | orchestrator | TASK [ceph-mon : import admin keyring into mon keyring] ************************ 2025-02-04 09:41:25.407259 | orchestrator | Tuesday 04 February 2025 09:32:39 +0000 (0:00:00.475) 0:05:25.217 ****** 2025-02-04 09:41:25.407265 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.407271 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.407277 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.407283 | orchestrator | 2025-02-04 09:41:25.407289 | orchestrator | TASK [ceph-mon : set_fact ceph-mon container command] ************************** 2025-02-04 09:41:25.407295 | orchestrator | Tuesday 04 February 2025 09:32:40 +0000 (0:00:00.404) 0:05:25.622 ****** 2025-02-04 09:41:25.407301 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.407307 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.407313 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.407319 | orchestrator | 2025-02-04 09:41:25.407325 | orchestrator | TASK [ceph-mon : ceph monitor mkfs with keyring] ******************************* 2025-02-04 09:41:25.407331 | orchestrator | Tuesday 04 February 2025 09:32:40 +0000 (0:00:00.562) 0:05:26.184 ****** 2025-02-04 09:41:25.407337 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:41:25.407343 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:41:25.407349 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:41:25.407355 | orchestrator | 2025-02-04 09:41:25.407361 | orchestrator | TASK [ceph-mon : ceph monitor mkfs without keyring] **************************** 2025-02-04 09:41:25.407367 | orchestrator | Tuesday 04 February 2025 09:32:42 +0000 (0:00:01.700) 0:05:27.885 ****** 2025-02-04 09:41:25.407380 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.407386 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.407392 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.407398 | orchestrator | 2025-02-04 09:41:25.407404 | orchestrator | TASK [ceph-mon : include start_monitor.yml] ************************************ 2025-02-04 09:41:25.407410 | orchestrator | Tuesday 04 February 2025 09:32:42 +0000 (0:00:00.391) 0:05:28.277 ****** 2025-02-04 09:41:25.407416 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:41:25.407422 | orchestrator | 2025-02-04 09:41:25.407428 | orchestrator | TASK [ceph-mon : ensure systemd service override directory exists] ************* 2025-02-04 09:41:25.407437 | orchestrator | Tuesday 04 February 2025 09:32:43 +0000 (0:00:00.717) 0:05:28.994 ****** 2025-02-04 09:41:25.407443 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.407449 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.407455 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.407461 | orchestrator | 2025-02-04 09:41:25.407467 | orchestrator | TASK [ceph-mon : add ceph-mon systemd service overrides] *********************** 2025-02-04 09:41:25.407473 | 
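[Annotation] The "ceph monitor mkfs with keyring" step above initializes each monitor's data store under /var/lib/ceph/mon/<cluster>-<id> using the keyring just created. Inside the container it reduces to roughly the following; CEPH_FSID is a placeholder for the cluster uuid that ceph-ansible passes in from its facts:

    # Placeholder only; a real run reuses the existing cluster fsid.
    CEPH_FSID=$(uuidgen)
    # Build the mon store for this host with the initial keyring.
    ceph-mon --cluster ceph --mkfs -i "$(hostname -s)" \
        --fsid "$CEPH_FSID" --keyring /etc/ceph/ceph.mon.keyring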
orchestrator | Tuesday 04 February 2025 09:32:43 +0000 (0:00:00.316) 0:05:29.311 ****** 2025-02-04 09:41:25.407478 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.407484 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.407490 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.407496 | orchestrator | 2025-02-04 09:41:25.407502 | orchestrator | TASK [ceph-mon : include_tasks systemd.yml] ************************************ 2025-02-04 09:41:25.407507 | orchestrator | Tuesday 04 February 2025 09:32:44 +0000 (0:00:00.311) 0:05:29.622 ****** 2025-02-04 09:41:25.407516 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:41:25.407522 | orchestrator | 2025-02-04 09:41:25.407527 | orchestrator | TASK [ceph-mon : generate systemd unit file for mon container] ***************** 2025-02-04 09:41:25.407533 | orchestrator | Tuesday 04 February 2025 09:32:44 +0000 (0:00:00.705) 0:05:30.328 ****** 2025-02-04 09:41:25.407539 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:41:25.407544 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:41:25.407550 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:41:25.407555 | orchestrator | 2025-02-04 09:41:25.407561 | orchestrator | TASK [ceph-mon : generate systemd ceph-mon target file] ************************ 2025-02-04 09:41:25.407566 | orchestrator | Tuesday 04 February 2025 09:32:45 +0000 (0:00:01.191) 0:05:31.519 ****** 2025-02-04 09:41:25.407571 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:41:25.407575 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:41:25.407580 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:41:25.407585 | orchestrator | 2025-02-04 09:41:25.407590 | orchestrator | TASK [ceph-mon : enable ceph-mon.target] *************************************** 2025-02-04 09:41:25.407595 | orchestrator | Tuesday 04 February 2025 09:32:47 +0000 (0:00:01.334) 0:05:32.854 ****** 2025-02-04 09:41:25.407600 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:41:25.407620 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:41:25.407626 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:41:25.407631 | orchestrator | 2025-02-04 09:41:25.407636 | orchestrator | TASK [ceph-mon : start the monitor service] ************************************ 2025-02-04 09:41:25.407641 | orchestrator | Tuesday 04 February 2025 09:32:49 +0000 (0:00:02.349) 0:05:35.204 ****** 2025-02-04 09:41:25.407646 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:41:25.407654 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:41:25.407660 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:41:25.407664 | orchestrator | 2025-02-04 09:41:25.407669 | orchestrator | TASK [ceph-mon : include_tasks ceph_keys.yml] ********************************** 2025-02-04 09:41:25.407674 | orchestrator | Tuesday 04 February 2025 09:32:51 +0000 (0:00:02.290) 0:05:37.494 ****** 2025-02-04 09:41:25.407679 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:41:25.407688 | orchestrator | 2025-02-04 09:41:25.407693 | orchestrator | TASK [ceph-mon : waiting for the monitor(s) to form the quorum...] 
************* 2025-02-04 09:41:25.407698 | orchestrator | Tuesday 04 February 2025 09:32:52 +0000 (0:00:00.704) 0:05:38.199 ****** 2025-02-04 09:41:25.407703 | orchestrator | FAILED - RETRYING: [testbed-node-0]: waiting for the monitor(s) to form the quorum... (10 retries left). 2025-02-04 09:41:25.407708 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.407713 | orchestrator | 2025-02-04 09:41:25.407718 | orchestrator | TASK [ceph-mon : fetch ceph initial keys] ************************************** 2025-02-04 09:41:25.407723 | orchestrator | Tuesday 04 February 2025 09:33:14 +0000 (0:00:21.605) 0:05:59.805 ****** 2025-02-04 09:41:25.407728 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.407732 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.407737 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.407742 | orchestrator | 2025-02-04 09:41:25.407747 | orchestrator | TASK [ceph-mon : include secure_cluster.yml] *********************************** 2025-02-04 09:41:25.407752 | orchestrator | Tuesday 04 February 2025 09:33:21 +0000 (0:00:07.644) 0:06:07.450 ****** 2025-02-04 09:41:25.407757 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.407762 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.407767 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.407771 | orchestrator | 2025-02-04 09:41:25.407776 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-02-04 09:41:25.407781 | orchestrator | Tuesday 04 February 2025 09:33:23 +0000 (0:00:01.518) 0:06:08.969 ****** 2025-02-04 09:41:25.407786 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:41:25.407791 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:41:25.407796 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:41:25.407801 | orchestrator | 2025-02-04 09:41:25.407806 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-02-04 09:41:25.407811 | orchestrator | Tuesday 04 February 2025 09:33:24 +0000 (0:00:00.810) 0:06:09.779 ****** 2025-02-04 09:41:25.407816 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:41:25.407821 | orchestrator | 2025-02-04 09:41:25.407826 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-02-04 09:41:25.407830 | orchestrator | Tuesday 04 February 2025 09:33:25 +0000 (0:00:01.001) 0:06:10.780 ****** 2025-02-04 09:41:25.407835 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.407840 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.407845 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.407850 | orchestrator | 2025-02-04 09:41:25.407855 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-02-04 09:41:25.407860 | orchestrator | Tuesday 04 February 2025 09:33:25 +0000 (0:00:00.511) 0:06:11.292 ****** 2025-02-04 09:41:25.407865 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:41:25.407870 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:41:25.407874 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:41:25.407879 | orchestrator | 2025-02-04 09:41:25.407884 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-02-04 09:41:25.407889 | orchestrator | Tuesday 04 February 2025 09:33:27 +0000 (0:00:01.646) 0:06:12.939 ****** 2025-02-04 09:41:25.407894 | 
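[Annotation] The "waiting for the monitor(s) to form the quorum..." task above polls until all monitors report in; here it needed one retry and about 21 seconds before the three mons converged. A sketch of an equivalent check, assuming the admin keyring is reachable, jq is installed, and a 10-second poll interval (the task's own delay may differ):

    # Poll quorum membership with the same retry budget as the task
    # (10 tries); this inventory expects all three mons in quorum.
    for try in $(seq 1 10); do
      n=$(ceph quorum_status --format json | jq '.quorum_names | length')
      [ "$n" -eq 3 ] && break
      sleep 10
    done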
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-04 09:41:25.407899 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-04 09:41:25.407904 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-04 09:41:25.407909 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.407914 | orchestrator | 2025-02-04 09:41:25.407919 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-02-04 09:41:25.407926 | orchestrator | Tuesday 04 February 2025 09:33:28 +0000 (0:00:01.402) 0:06:14.342 ****** 2025-02-04 09:41:25.407931 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.407936 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.407941 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.407949 | orchestrator | 2025-02-04 09:41:25.407954 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-02-04 09:41:25.407959 | orchestrator | Tuesday 04 February 2025 09:33:29 +0000 (0:00:00.567) 0:06:14.909 ****** 2025-02-04 09:41:25.407964 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:41:25.407969 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:41:25.407974 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:41:25.407979 | orchestrator | 2025-02-04 09:41:25.407984 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-02-04 09:41:25.407989 | orchestrator | 2025-02-04 09:41:25.407993 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-02-04 09:41:25.407999 | orchestrator | Tuesday 04 February 2025 09:33:32 +0000 (0:00:03.430) 0:06:18.340 ****** 2025-02-04 09:41:25.408003 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:41:25.408008 | orchestrator | 2025-02-04 09:41:25.408013 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-02-04 09:41:25.408018 | orchestrator | Tuesday 04 February 2025 09:33:33 +0000 (0:00:00.841) 0:06:19.181 ****** 2025-02-04 09:41:25.408023 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.408028 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.408046 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.408052 | orchestrator | 2025-02-04 09:41:25.408057 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-02-04 09:41:25.408062 | orchestrator | Tuesday 04 February 2025 09:33:34 +0000 (0:00:00.772) 0:06:19.954 ****** 2025-02-04 09:41:25.408067 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.408072 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.408077 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.408082 | orchestrator | 2025-02-04 09:41:25.408087 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-02-04 09:41:25.408092 | orchestrator | Tuesday 04 February 2025 09:33:34 +0000 (0:00:00.403) 0:06:20.358 ****** 2025-02-04 09:41:25.408097 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.408102 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.408107 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.408112 | orchestrator | 2025-02-04 09:41:25.408117 | orchestrator | TASK [ceph-handler : check for a rgw container] 
******************************** 2025-02-04 09:41:25.408122 | orchestrator | Tuesday 04 February 2025 09:33:35 +0000 (0:00:00.538) 0:06:20.896 ****** 2025-02-04 09:41:25.408127 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.408132 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.408137 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.408142 | orchestrator | 2025-02-04 09:41:25.408147 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-02-04 09:41:25.408163 | orchestrator | Tuesday 04 February 2025 09:33:35 +0000 (0:00:00.370) 0:06:21.266 ****** 2025-02-04 09:41:25.408168 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.408173 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.408178 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.408183 | orchestrator | 2025-02-04 09:41:25.408189 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-02-04 09:41:25.408193 | orchestrator | Tuesday 04 February 2025 09:33:36 +0000 (0:00:00.749) 0:06:22.016 ****** 2025-02-04 09:41:25.408198 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.408203 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.408208 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.408213 | orchestrator | 2025-02-04 09:41:25.408218 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-02-04 09:41:25.408223 | orchestrator | Tuesday 04 February 2025 09:33:36 +0000 (0:00:00.359) 0:06:22.376 ****** 2025-02-04 09:41:25.408228 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.408233 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.408238 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.408246 | orchestrator | 2025-02-04 09:41:25.408252 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-02-04 09:41:25.408257 | orchestrator | Tuesday 04 February 2025 09:33:37 +0000 (0:00:00.526) 0:06:22.902 ****** 2025-02-04 09:41:25.408262 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.408267 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.408272 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.408277 | orchestrator | 2025-02-04 09:41:25.408282 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-02-04 09:41:25.408287 | orchestrator | Tuesday 04 February 2025 09:33:37 +0000 (0:00:00.346) 0:06:23.248 ****** 2025-02-04 09:41:25.408292 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.408296 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.408304 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.408309 | orchestrator | 2025-02-04 09:41:25.408314 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-02-04 09:41:25.408319 | orchestrator | Tuesday 04 February 2025 09:33:38 +0000 (0:00:00.313) 0:06:23.562 ****** 2025-02-04 09:41:25.408324 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.408329 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.408334 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.408339 | orchestrator | 2025-02-04 09:41:25.408344 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-02-04 09:41:25.408349 | orchestrator | 
orchestrator | Tuesday 04 February 2025 09:33:38 +0000 (0:00:00.331) 0:06:23.893 ******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
orchestrator | Tuesday 04 February 2025 09:33:39 +0000 (0:00:01.132) 0:06:25.026 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
orchestrator | Tuesday 04 February 2025 09:33:39 +0000 (0:00:00.523) 0:06:25.549 ******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
orchestrator | Tuesday 04 February 2025 09:33:40 +0000 (0:00:00.583) 0:06:26.133 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
orchestrator | Tuesday 04 February 2025 09:33:40 +0000 (0:00:00.403) 0:06:26.537 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ******************************
orchestrator | Tuesday 04 February 2025 09:33:41 +0000 (0:00:00.746) 0:06:27.283 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ******************************
orchestrator | Tuesday 04 February 2025 09:33:42 +0000 (0:00:00.498) 0:06:27.781 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ******************************
orchestrator | Tuesday 04 February 2025 09:33:42 +0000 (0:00:00.500) 0:06:28.282 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ******************************
orchestrator | Tuesday 04 February 2025 09:33:43 +0000 (0:00:00.461) 0:06:28.744 ******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-handler : set_fact handler_crash_status] ****************************
orchestrator | Tuesday 04 February 2025 09:33:43 +0000 (0:00:00.739) 0:06:29.483 ******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
orchestrator | Tuesday 04 February 2025 09:33:44 +0000 (0:00:00.442) 0:06:29.926 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
orchestrator | Tuesday 04 February 2025 09:33:44 +0000 (0:00:00.401) 0:06:30.328 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-config : reset num_osds] ********************************************
orchestrator | Tuesday 04 February 2025 09:33:45 +0000 (0:00:00.745) 0:06:31.073 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-config : count number of osds for lvm scenario] *********************
orchestrator | Tuesday 04 February 2025 09:33:45 +0000 (0:00:00.398) 0:06:31.472 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
orchestrator | Tuesday 04 February 2025 09:33:46 +0000 (0:00:00.394) 0:06:31.866 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
orchestrator | Tuesday 04 February 2025 09:33:46 +0000 (0:00:00.397) 0:06:32.263 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
orchestrator | Tuesday 04 February 2025 09:33:47 +0000 (0:00:00.780) 0:06:33.044 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
orchestrator | Tuesday 04 February 2025 09:33:47 +0000 (0:00:00.425) 0:06:33.470 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
orchestrator | Tuesday 04 February 2025 09:33:48 +0000 (0:00:00.423) 0:06:33.893 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
orchestrator | Tuesday 04 February 2025 09:33:48 +0000 (0:00:00.467) 0:06:34.360 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
orchestrator | Tuesday 04 February 2025 09:33:49 +0000 (0:00:00.667) 0:06:35.028 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
orchestrator | Tuesday 04 February 2025 09:33:49 +0000 (0:00:00.438) 0:06:35.467 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
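All of the ceph-volume accounting above is skipped on this run, but it is how ceph-config sizes a fresh node: ask ceph-volume for a dry-run report of the batch layout, then count the OSDs it would create. A sketch, assuming a `devices` list holding the data disks and allowing for both the legacy dict report and the newer list report that the two set_fact tasks above distinguish:

- name: run 'ceph-volume lvm batch --report' to see how many osds are to be created
  command: "ceph-volume lvm batch --report --format=json {{ devices | join(' ') }}"
  register: lvm_batch_report
  changed_when: false

# Legacy report: a dict with an "osds" key; new report: a plain list.
- name: set_fact num_osds (legacy report)
  set_fact:
    num_osds: "{{ (lvm_batch_report.stdout | from_json).osds | length }}"
  when: (lvm_batch_report.stdout | from_json) is mapping

- name: set_fact num_osds (new report)
  set_fact:
    num_osds: "{{ lvm_batch_report.stdout | from_json | length }}"
  when: (lvm_batch_report.stdout | from_json) is not mapping

'ceph-volume lvm list --format=json' is then parsed the same way to add OSDs that already exist on the host.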
orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
orchestrator | Tuesday 04 February 2025 09:33:50 +0000 (0:00:00.486) 0:06:35.953 ******
orchestrator | skipping: [testbed-node-0] => (item=)
orchestrator | skipping: [testbed-node-0] => (item=)
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1] => (item=)
orchestrator | skipping: [testbed-node-1] => (item=)
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2] => (item=)
orchestrator | skipping: [testbed-node-2] => (item=)
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] *****************
orchestrator | Tuesday 04 February 2025 09:33:50 +0000 (0:00:00.424) 0:06:36.378 ******
orchestrator | skipping: [testbed-node-0] => (item=osd memory target)
orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1] => (item=osd memory target)
orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2] => (item=osd memory target)
orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-config : set_fact _osd_memory_target] *******************************
orchestrator | Tuesday 04 February 2025 09:33:51 +0000 (0:00:00.868) 0:06:37.246 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-config : create ceph conf directory] ********************************
orchestrator | Tuesday 04 February 2025 09:33:52 +0000 (0:00:00.613) 0:06:37.860 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
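When osd_memory_target is not pinned through ceph_conf_overrides, the fact is derived from host RAM divided by the OSD count. Roughly, with an illustrative headroom factor (the exact expression in ceph-config differs):

# 0.7 is an assumed safety factor, not the ceph-config value;
# memtotal_mb * 1048576 converts the Ansible fact to bytes.
- name: set_fact _osd_memory_target
  set_fact:
    _osd_memory_target: "{{ ((ansible_facts['memtotal_mb'] * 1048576 * 0.7) / (num_osds | int)) | int }}"
  when: num_osds | default(0) | int > 0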
orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
orchestrator | Tuesday 04 February 2025 09:33:52 +0000 (0:00:00.501) 0:06:38.362 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
orchestrator | Tuesday 04 February 2025 09:33:53 +0000 (0:00:00.454) 0:06:38.816 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
orchestrator | Tuesday 04 February 2025 09:33:53 +0000 (0:00:00.714) 0:06:39.531 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
orchestrator | Tuesday 04 February 2025 09:33:54 +0000 (0:00:00.471) 0:06:40.003 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
orchestrator | Tuesday 04 February 2025 09:33:54 +0000 (0:00:00.457) 0:06:40.460 ******
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
orchestrator | Tuesday 04 February 2025 09:33:55 +0000 (0:00:00.531) 0:06:40.991 ******
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
orchestrator | Tuesday 04 February 2025 09:33:55 +0000 (0:00:00.550) 0:06:41.542 ******
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
orchestrator | Tuesday 04 February 2025 09:33:56 +0000 (0:00:00.907) 0:06:42.450 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
orchestrator | Tuesday 04 February 2025 09:33:57 +0000 (0:00:00.923) 0:06:43.373 ******
orchestrator | skipping: [testbed-node-0] => (item=0)
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1] => (item=0)
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2] => (item=0)
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
orchestrator | Tuesday 04 February 2025 09:33:58 +0000 (0:00:00.834) 0:06:44.207 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
orchestrator | Tuesday 04 February 2025 09:33:59 +0000 (0:00:00.508) 0:06:44.715 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
orchestrator | Tuesday 04 February 2025 09:33:59 +0000 (0:00:00.542) 0:06:45.258 ******
orchestrator | skipping: [testbed-node-0] => (item=0)
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1] => (item=0)
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2] => (item=0)
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
orchestrator | Tuesday 04 February 2025 09:34:00 +0000 (0:00:00.936) 0:06:46.194 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
orchestrator | Tuesday 04 February 2025 09:34:01 +0000 (0:00:00.395) 0:06:46.589 ******
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-config : generate ceph.conf configuration file] *********************
orchestrator | Tuesday 04 February 2025 09:34:01 +0000 (0:00:00.912) 0:06:47.502 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-rgw : create rgw keyrings] ******************************************
orchestrator | Tuesday 04 February 2025 09:34:02 +0000 (0:00:00.655) 0:06:48.158 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-rgw : include_tasks multisite] **************************************
orchestrator | Tuesday 04 February 2025 09:34:03 +0000 (0:00:00.907) 0:06:49.066 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] **************
orchestrator | Tuesday 04 February 2025 09:34:04 +0000 (0:00:00.631) 0:06:49.697 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mgr : set_fact container_exec_cmd] **********************************
orchestrator | Tuesday 04 February 2025 09:34:04 +0000 (0:00:00.769) 0:06:50.467 ******
orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
orchestrator |
orchestrator | TASK [ceph-mgr : include common.yml] *******************************************
orchestrator | Tuesday 04 February 2025 09:34:05 +0000 (0:00:00.819) 0:06:51.286 ******
orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
orchestrator |
orchestrator | TASK [ceph-mgr : create mgr directory] *****************************************
orchestrator | Tuesday 04 February 2025 09:34:06 +0000 (0:00:00.552) 0:06:51.839 ******
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mgr : fetch ceph mgr keyring] ***************************************
orchestrator | Tuesday 04 February 2025 09:34:07 +0000 (0:00:00.736) 0:06:52.575 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mgr : create ceph mgr keyring(s) on a mon node] *********************
orchestrator | Tuesday 04 February 2025 09:34:07 +0000 (0:00:00.513) 0:06:53.089 ******
orchestrator | changed: [testbed-node-0] => (item=None)
orchestrator | changed: [testbed-node-0] => (item=None)
orchestrator | changed: [testbed-node-0] => (item=None)
orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
orchestrator |
orchestrator | TASK [ceph-mgr : set_fact _mgr_keys] *******************************************
orchestrator | Tuesday 04 February 2025 09:34:14 +0000 (0:00:07.295) 0:07:00.384 ******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
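The keyring step above delegates to the first monitor and creates one mgr key per manager host; ceph-ansible drives it through its ceph_key module, and the items render as (item=None) because key material is masked. The effect is equivalent to the plain CLI below (caps are the upstream mgr defaults; group variable names follow ceph-ansible conventions):

# CLI-equivalent sketch of the ceph_key call, not the module itself.
- name: create ceph mgr keyring(s) on a mon node
  command: >
    ceph auth get-or-create mgr.{{ item }}
    mon 'allow profile mgr' osd 'allow *' mds 'allow *'
    -o /etc/ceph/ceph.mgr.{{ item }}.keyring
  loop: "{{ groups[mgr_group_name] }}"
  delegate_to: "{{ groups[mon_group_name][0] }}"
  run_once: true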
orchestrator | TASK [ceph-mgr : get keys from monitors] ***************************************
orchestrator | Tuesday 04 February 2025 09:34:15 +0000 (0:00:00.550) 0:07:00.935 ******
orchestrator | skipping: [testbed-node-0] => (item=None)
orchestrator | skipping: [testbed-node-1] => (item=None)
orchestrator | skipping: [testbed-node-2] => (item=None)
orchestrator | ok: [testbed-node-0] => (item=None)
orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
orchestrator |
orchestrator | TASK [ceph-mgr : copy ceph key(s) if needed] ***********************************
orchestrator | Tuesday 04 February 2025 09:34:17 +0000 (0:00:01.741) 0:07:02.677 ******
orchestrator | skipping: [testbed-node-0] => (item=None)
orchestrator | skipping: [testbed-node-1] => (item=None)
orchestrator | skipping: [testbed-node-2] => (item=None)
orchestrator | changed: [testbed-node-0] => (item=None)
orchestrator | changed: [testbed-node-1] => (item=None)
orchestrator | changed: [testbed-node-2] => (item=None)
orchestrator |
orchestrator | TASK [ceph-mgr : set mgr key permissions] **************************************
orchestrator | Tuesday 04 February 2025 09:34:18 +0000 (0:00:01.181) 0:07:03.858 ******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mgr : append dashboard modules to ceph_mgr_modules] *****************
orchestrator | Tuesday 04 February 2025 09:34:19 +0000 (0:00:00.832) 0:07:04.691 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mgr : include pre_requisite.yml] ************************************
orchestrator | Tuesday 04 February 2025 09:34:19 +0000 (0:00:00.311) 0:07:05.003 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mgr : include start_mgr.yml] ****************************************
orchestrator | Tuesday 04 February 2025 09:34:19 +0000 (0:00:00.309) 0:07:05.313 ******
orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
orchestrator |
orchestrator | TASK [ceph-mgr : ensure systemd service override directory exists] *************
orchestrator | Tuesday 04 February 2025 09:34:20 +0000 (0:00:00.896) 0:07:06.209 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mgr : add ceph-mgr systemd service overrides] ***********************
orchestrator | Tuesday 04 February 2025 09:34:21 +0000 (0:00:00.503) 0:07:06.712 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mgr : include_tasks systemd.yml] ************************************
orchestrator | Tuesday 04 February 2025 09:34:21 +0000 (0:00:00.460) 0:07:07.173 ******
orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
orchestrator |
orchestrator | TASK [ceph-mgr : generate systemd unit file] ***********************************
orchestrator | Tuesday 04 February 2025 09:34:22 +0000 (0:00:00.983) 0:07:08.157 ******
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mgr : generate systemd ceph-mgr target file] ************************
orchestrator | Tuesday 04 February 2025 09:34:24 +0000 (0:00:01.480) 0:07:09.637 ******
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mgr : enable ceph-mgr.target] ***************************************
orchestrator | Tuesday 04 February 2025 09:34:25 +0000 (0:00:01.294) 0:07:10.931 ******
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mgr : systemd start mgr] ********************************************
orchestrator | Tuesday 04 February 2025 09:34:27 +0000 (0:00:02.369) 0:07:13.301 ******
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
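On a containerized deployment the three tasks above template a unit, reload systemd, and bring the per-host service instance up. A compact sketch (template and instance naming assumed to match a ceph-mgr@ layout, not copied from the role):

- name: generate systemd unit file
  template:
    src: ceph-mgr.service.j2      # assumed template name
    dest: /etc/systemd/system/ceph-mgr@.service
    owner: root
    group: root
    mode: "0644"

# Enable and start the instance after a daemon reload so the fresh
# unit file is picked up.
- name: systemd start mgr
  systemd:
    name: "ceph-mgr@{{ ansible_facts['hostname'] }}"
    state: started
    enabled: true
    daemon_reload: true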
orchestrator | TASK [ceph-mgr : include mgr_modules.yml] **************************************
orchestrator | Tuesday 04 February 2025 09:34:29 +0000 (0:00:02.202) 0:07:15.504 ******
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
orchestrator |
orchestrator | TASK [ceph-mgr : wait for all mgr to be up] ************************************
orchestrator | Tuesday 04 February 2025 09:34:30 +0000 (0:00:00.808) 0:07:16.312 ******
orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (30 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (29 retries left).
orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (28 retries left).
orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
orchestrator |
orchestrator | TASK [ceph-mgr : get enabled modules from ceph-mgr] ****************************
orchestrator | Tuesday 04 February 2025 09:34:50 +0000 (0:00:19.838) 0:07:36.150 ******
orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
orchestrator |
orchestrator | TASK [ceph-mgr : set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
orchestrator | Tuesday 04 February 2025 09:34:52 +0000 (0:00:01.973) 0:07:38.124 ******
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mgr : set _disabled_ceph_mgr_modules fact] **************************
orchestrator | Tuesday 04 February 2025 09:34:53 +0000 (0:00:00.604) 0:07:38.729 ******
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-mgr : disable ceph mgr enabled modules] *****************************
orchestrator | Tuesday 04 February 2025 09:34:53 +0000 (0:00:00.574) 0:07:39.303 ******
orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
orchestrator |
orchestrator | TASK [ceph-mgr : add modules to ceph-mgr] **************************************
orchestrator | Tuesday 04 February 2025 09:35:00 +0000 (0:00:06.971) 0:07:46.275 ******
orchestrator | skipping: [testbed-node-2] => (item=balancer)
orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
orchestrator | skipping: [testbed-node-2] => (item=status)
orchestrator |
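The 19.8 s spent in "wait for all mgr to be up" is a retries/until poll run from the first monitor; the three FAILED - RETRYING lines are the poll itself, not an error. Sketched below (the JSON field used is an assumption; the real condition also compares the active-plus-standby count against the mgr group size):

- name: wait for all mgr to be up
  command: ceph --cluster ceph mgr dump -f json
  register: mgr_dump
  changed_when: false
  retries: 30
  delay: 5
  until: (mgr_dump.stdout | from_json).available | bool
  delegate_to: "{{ groups[mon_group_name][0] }}"
  run_once: true

Module management then reduces to 'ceph mgr module disable <name>' and 'ceph mgr module enable <name>' through the same delegate, which is what produces the iostat/nfs/restful and dashboard/prometheus items above.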
orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] **********************
orchestrator | Tuesday 04 February 2025 09:35:06 +0000 (0:00:05.401) 0:07:51.677 ******
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] **********************************
orchestrator | Tuesday 04 February 2025 09:35:06 +0000 (0:00:00.779) 0:07:52.456 ******
orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
orchestrator |
orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ********
orchestrator | Tuesday 04 February 2025 09:35:08 +0000 (0:00:01.507) 0:07:53.964 ******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] ***********************
orchestrator | Tuesday 04 February 2025 09:35:08 +0000 (0:00:00.526) 0:07:54.491 ******
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ********************
orchestrator | Tuesday 04 February 2025 09:35:10 +0000 (0:00:01.768) 0:07:56.259 ******
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] *********
orchestrator | Tuesday 04 February 2025 09:35:11 +0000 (0:00:00.951) 0:07:57.211 ******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
orchestrator | Tuesday 04 February 2025 09:35:12 +0000 (0:00:00.510) 0:07:57.721 ******
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
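Each play opens with the same probes: ask the container runtime whether a daemon's container exists, then turn the result into a handler_*_status fact that later gates restarts. A sketch assuming the docker CLI (the runtime binary is a deployment variable and may equally be podman):

- name: check for an osd container
  command: "docker ps -q --filter name=ceph-osd"
  register: ceph_osd_container_stat
  changed_when: false
  failed_when: false    # a missing container is a valid answer, not a failure
  check_mode: false

- name: set_fact handler_osd_status
  set_fact:
    handler_osd_status: "{{ (ceph_osd_container_stat.stdout_lines | default([])) | length > 0 }}"

That is the pattern visible in the ceph-osd play below: the mon/mgr probes skip on OSD nodes, while the osd/mds/rgw probes run and report ok.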
orchestrator | PLAY [Apply role ceph-osd] *****************************************************
orchestrator |
orchestrator | TASK [ceph-handler : include check_running_containers.yml] *********************
orchestrator | Tuesday 04 February 2025 09:35:14 +0000 (0:00:02.618) 0:08:00.340 ******
orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
orchestrator |
orchestrator | TASK [ceph-handler : check for a mon container] ********************************
orchestrator | Tuesday 04 February 2025 09:35:15 +0000 (0:00:01.039) 0:08:01.379 ******
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : check for an osd container] *******************************
orchestrator | Tuesday 04 February 2025 09:35:16 +0000 (0:00:00.412) 0:08:01.792 ******
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : check for a mds container] ********************************
orchestrator | Tuesday 04 February 2025 09:35:17 +0000 (0:00:01.145) 0:08:02.938 ******
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : check for a rgw container] ********************************
orchestrator | Tuesday 04 February 2025 09:35:18 +0000 (0:00:00.947) 0:08:03.885 ******
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : check for a mgr container] ********************************
orchestrator | Tuesday 04 February 2025 09:35:19 +0000 (0:00:00.849) 0:08:04.735 ******
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : check for a rbd mirror container] *************************
orchestrator | Tuesday 04 February 2025 09:35:19 +0000 (0:00:00.390) 0:08:05.126 ******
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : check for a nfs container] ********************************
orchestrator | Tuesday 04 February 2025 09:35:20 +0000 (0:00:00.743) 0:08:05.869 ******
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************
orchestrator | Tuesday 04 February 2025 09:35:20 +0000 (0:00:00.461) 0:08:06.330 ******
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : check for a rbd-target-api container] *********************
orchestrator | Tuesday 04 February 2025 09:35:21 +0000 (0:00:00.432) 0:08:06.763 ******
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] **********************
orchestrator | Tuesday 04 February 2025 09:35:21 +0000 (0:00:00.465) 0:08:07.228 ******
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : check for a ceph-crash container] *************************
orchestrator | Tuesday 04 February 2025 09:35:22 +0000 (0:00:00.777) 0:08:08.005 ******
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
orchestrator | Tuesday 04 February 2025 09:35:23 +0000 (0:00:00.831) 0:08:08.837 ******
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
orchestrator | Tuesday 04 February 2025 09:35:23 +0000 (0:00:00.484) 0:08:09.322 ******
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
orchestrator | Tuesday 04 February 2025 09:35:24 +0000 (0:00:00.378) 0:08:09.701 ******
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
orchestrator | Tuesday 04 February 2025 09:35:24 +0000 (0:00:00.704) 0:08:10.405 ******
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ******************************
orchestrator | Tuesday 04 February 2025 09:35:25 +0000 (0:00:00.414) 0:08:10.820 ******
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ******************************
orchestrator | Tuesday 04 February 2025 09:35:25 +0000 (0:00:00.504) 0:08:11.324 ******
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ******************************
orchestrator | Tuesday 04 February 2025 09:35:26 +0000 (0:00:00.401) 0:08:11.726 ******
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ******************************
orchestrator | Tuesday 04 February 2025 09:35:26 +0000 (0:00:00.807) 0:08:12.533 ******
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-handler : set_fact handler_crash_status] ****************************
orchestrator | Tuesday 04 February 2025 09:35:27 +0000 (0:00:00.399) 0:08:12.933 ******
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
orchestrator | Tuesday 04 February 2025 09:35:27 +0000 (0:00:00.404) 0:08:13.338 ******
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
orchestrator | Tuesday 04 February 2025 09:35:28 +0000 (0:00:00.359) 0:08:13.698 ******
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : reset num_osds] ********************************************
orchestrator | Tuesday 04 February 2025 09:35:28 +0000 (0:00:00.702) 0:08:14.400 ******
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : count number of osds for lvm scenario] *********************
orchestrator | Tuesday 04 February 2025 09:35:29 +0000 (0:00:00.416) 0:08:14.816 ******
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
orchestrator | Tuesday 04 February 2025 09:35:29 +0000 (0:00:00.411) 0:08:15.228 ******
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
orchestrator | Tuesday 04 February 2025 09:35:30 +0000 (0:00:00.341) 0:08:15.570 ******
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
orchestrator | Tuesday 04 February 2025 09:35:30 +0000 (0:00:00.492) 0:08:16.062 ******
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
orchestrator | Tuesday 04 February 2025 09:35:30 +0000 (0:00:00.320) 0:08:16.382 ******
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
orchestrator | Tuesday 04 February 2025 09:35:31 +0000 (0:00:00.329) 0:08:16.712 ******
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
orchestrator | Tuesday 04 February 2025 09:35:31 +0000 (0:00:00.311) 0:08:17.023 ******
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
orchestrator | Tuesday 04 February 2025 09:35:32 +0000 (0:00:00.671) 0:08:17.695 ******
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
orchestrator | Tuesday 04 February 2025 09:35:32 +0000 (0:00:00.394) 0:08:18.089 ******
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
orchestrator | Tuesday 04 February 2025 09:35:32 +0000 (0:00:00.354) 0:08:18.443 ******
orchestrator | skipping: [testbed-node-3] => (item=)
orchestrator | skipping: [testbed-node-3] => (item=)
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4] => (item=)
orchestrator | skipping: [testbed-node-4] => (item=)
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5] => (item=)
orchestrator | skipping: [testbed-node-5] => (item=)
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] *****************
orchestrator | Tuesday 04 February 2025 09:35:33 +0000 (0:00:00.426) 0:08:18.870 ******
orchestrator | skipping: [testbed-node-3] => (item=osd memory target)
(item=osd_memory_target)  2025-02-04 09:41:25.412257 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.412262 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-02-04 09:41:25.412267 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-02-04 09:41:25.412272 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.412277 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-02-04 09:41:25.412282 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-02-04 09:41:25.412287 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.412292 | orchestrator | 2025-02-04 09:41:25.412297 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-02-04 09:41:25.412302 | orchestrator | Tuesday 04 February 2025 09:35:34 +0000 (0:00:00.715) 0:08:19.586 ****** 2025-02-04 09:41:25.412307 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.412312 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.412321 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.412326 | orchestrator | 2025-02-04 09:41:25.412334 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-02-04 09:41:25.412339 | orchestrator | Tuesday 04 February 2025 09:35:34 +0000 (0:00:00.440) 0:08:20.026 ****** 2025-02-04 09:41:25.412344 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.412349 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.412354 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.412359 | orchestrator | 2025-02-04 09:41:25.412364 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-02-04 09:41:25.412369 | orchestrator | Tuesday 04 February 2025 09:35:34 +0000 (0:00:00.364) 0:08:20.391 ****** 2025-02-04 09:41:25.412374 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.412379 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.412384 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.412389 | orchestrator | 2025-02-04 09:41:25.412394 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-02-04 09:41:25.412399 | orchestrator | Tuesday 04 February 2025 09:35:35 +0000 (0:00:00.423) 0:08:20.814 ****** 2025-02-04 09:41:25.412404 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.412409 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.412414 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.412419 | orchestrator | 2025-02-04 09:41:25.412424 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-02-04 09:41:25.412429 | orchestrator | Tuesday 04 February 2025 09:35:36 +0000 (0:00:00.794) 0:08:21.609 ****** 2025-02-04 09:41:25.412434 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.412439 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.412444 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.412448 | orchestrator | 2025-02-04 09:41:25.412454 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-02-04 09:41:25.412459 | orchestrator | Tuesday 04 February 2025 09:35:36 +0000 (0:00:00.522) 0:08:22.131 ****** 2025-02-04 09:41:25.412464 | orchestrator | skipping: [testbed-node-3] 
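[Editor's note: the mutually exclusive set_fact tasks around here (radosgw_address_block ipv4/ipv6, explicit radosgw_address, then the interface lookup below) are how the RGW bind address gets picked; all of them are skipped on this run. A sketch of the first two branches, assuming the ansible.utils collection provides the ipaddr filter and that variable names follow the task titles:

  - name: set_fact _radosgw_address to radosgw_address_block ipv4 (sketch)
    ansible.builtin.set_fact:
      _radosgw_address: >-
        {{ ansible_facts['all_ipv4_addresses']
           | ansible.utils.ipaddr(radosgw_address_block)
           | first }}
    when: radosgw_address_block is defined

  - name: set_fact _radosgw_address to radosgw_address (sketch)
    ansible.builtin.set_fact:
      _radosgw_address: "{{ radosgw_address }}"
    when: radosgw_address is defined

Only one branch ever fires, so _radosgw_address ends up set exactly once per host.]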
2025-02-04 09:41:25.412468 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.412473 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.412482 | orchestrator | 2025-02-04 09:41:25.412487 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-02-04 09:41:25.412492 | orchestrator | Tuesday 04 February 2025 09:35:37 +0000 (0:00:00.531) 0:08:22.662 ****** 2025-02-04 09:41:25.412498 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-04 09:41:25.412503 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-04 09:41:25.412508 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-04 09:41:25.412513 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.412518 | orchestrator | 2025-02-04 09:41:25.412523 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-02-04 09:41:25.412528 | orchestrator | Tuesday 04 February 2025 09:35:37 +0000 (0:00:00.540) 0:08:23.203 ****** 2025-02-04 09:41:25.412533 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-04 09:41:25.412539 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-04 09:41:25.412544 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-04 09:41:25.412549 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.412554 | orchestrator | 2025-02-04 09:41:25.412559 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-02-04 09:41:25.412564 | orchestrator | Tuesday 04 February 2025 09:35:38 +0000 (0:00:00.715) 0:08:23.918 ****** 2025-02-04 09:41:25.412569 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-04 09:41:25.412574 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-04 09:41:25.412579 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-04 09:41:25.412588 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.412593 | orchestrator | 2025-02-04 09:41:25.412598 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-04 09:41:25.412603 | orchestrator | Tuesday 04 February 2025 09:35:39 +0000 (0:00:01.029) 0:08:24.948 ****** 2025-02-04 09:41:25.412608 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.412613 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.412618 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.412623 | orchestrator | 2025-02-04 09:41:25.412628 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-02-04 09:41:25.412633 | orchestrator | Tuesday 04 February 2025 09:35:40 +0000 (0:00:00.769) 0:08:25.718 ****** 2025-02-04 09:41:25.412638 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-04 09:41:25.412643 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.412648 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-04 09:41:25.412666 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.412672 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-04 09:41:25.412677 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.412682 | orchestrator | 2025-02-04 09:41:25.412687 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-02-04 09:41:25.412692 | orchestrator | 
Tuesday 04 February 2025 09:35:40 +0000 (0:00:00.628) 0:08:26.346 ****** 2025-02-04 09:41:25.412697 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.412701 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.412706 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.412711 | orchestrator | 2025-02-04 09:41:25.412716 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-04 09:41:25.412721 | orchestrator | Tuesday 04 February 2025 09:35:41 +0000 (0:00:00.368) 0:08:26.714 ****** 2025-02-04 09:41:25.412726 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.412731 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.412736 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.412741 | orchestrator | 2025-02-04 09:41:25.412746 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-02-04 09:41:25.412751 | orchestrator | Tuesday 04 February 2025 09:35:41 +0000 (0:00:00.352) 0:08:27.067 ****** 2025-02-04 09:41:25.412756 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-04 09:41:25.412761 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.412766 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-04 09:41:25.412771 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.412776 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-04 09:41:25.412781 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.412786 | orchestrator | 2025-02-04 09:41:25.412791 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-02-04 09:41:25.412796 | orchestrator | Tuesday 04 February 2025 09:35:42 +0000 (0:00:00.774) 0:08:27.841 ****** 2025-02-04 09:41:25.412800 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-02-04 09:41:25.412805 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.412810 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-02-04 09:41:25.412815 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.412820 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-02-04 09:41:25.412825 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.412830 | orchestrator | 2025-02-04 09:41:25.412835 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-02-04 09:41:25.412840 | orchestrator | Tuesday 04 February 2025 09:35:42 +0000 (0:00:00.354) 0:08:28.195 ****** 2025-02-04 09:41:25.412845 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-04 09:41:25.412853 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-04 09:41:25.412858 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-04 09:41:25.412863 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-02-04 09:41:25.412868 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-02-04 09:41:25.412873 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-02-04 09:41:25.412878 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.412883 | orchestrator | skipping: 
[testbed-node-4] 2025-02-04 09:41:25.412888 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-02-04 09:41:25.412893 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-02-04 09:41:25.412898 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-02-04 09:41:25.412903 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.412908 | orchestrator | 2025-02-04 09:41:25.412913 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-02-04 09:41:25.412918 | orchestrator | Tuesday 04 February 2025 09:35:43 +0000 (0:00:00.730) 0:08:28.925 ****** 2025-02-04 09:41:25.412923 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.412928 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.412933 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.412937 | orchestrator | 2025-02-04 09:41:25.412942 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-02-04 09:41:25.412952 | orchestrator | Tuesday 04 February 2025 09:35:44 +0000 (0:00:00.763) 0:08:29.689 ****** 2025-02-04 09:41:25.412957 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-04 09:41:25.412962 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.412967 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-02-04 09:41:25.412972 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.412977 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-02-04 09:41:25.412982 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.412987 | orchestrator | 2025-02-04 09:41:25.412992 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-02-04 09:41:25.412996 | orchestrator | Tuesday 04 February 2025 09:35:44 +0000 (0:00:00.534) 0:08:30.223 ****** 2025-02-04 09:41:25.413001 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.413006 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.413011 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.413016 | orchestrator | 2025-02-04 09:41:25.413021 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-02-04 09:41:25.413026 | orchestrator | Tuesday 04 February 2025 09:35:45 +0000 (0:00:00.960) 0:08:31.184 ****** 2025-02-04 09:41:25.413031 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.413036 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.413041 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.413046 | orchestrator | 2025-02-04 09:41:25.413051 | orchestrator | TASK [ceph-osd : set_fact add_osd] ********************************************* 2025-02-04 09:41:25.413056 | orchestrator | Tuesday 04 February 2025 09:35:46 +0000 (0:00:00.797) 0:08:31.981 ****** 2025-02-04 09:41:25.413073 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.413079 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.413084 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.413089 | orchestrator | 2025-02-04 09:41:25.413094 | orchestrator | TASK [ceph-osd : set_fact container_exec_cmd] ********************************** 2025-02-04 09:41:25.413099 | orchestrator | Tuesday 04 February 2025 09:35:47 +0000 (0:00:00.724) 0:08:32.706 ****** 2025-02-04 09:41:25.413104 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-02-04 09:41:25.413110 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-04 09:41:25.413114 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-04 09:41:25.413119 | orchestrator | 2025-02-04 09:41:25.413128 | orchestrator | TASK [ceph-osd : include_tasks system_tuning.yml] ****************************** 2025-02-04 09:41:25.413133 | orchestrator | Tuesday 04 February 2025 09:35:48 +0000 (0:00:01.052) 0:08:33.758 ****** 2025-02-04 09:41:25.413138 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:41:25.413143 | orchestrator | 2025-02-04 09:41:25.413148 | orchestrator | TASK [ceph-osd : disable osd directory parsing by updatedb] ******************** 2025-02-04 09:41:25.413164 | orchestrator | Tuesday 04 February 2025 09:35:48 +0000 (0:00:00.785) 0:08:34.543 ****** 2025-02-04 09:41:25.413169 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.413174 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.413179 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.413184 | orchestrator | 2025-02-04 09:41:25.413189 | orchestrator | TASK [ceph-osd : disable osd directory path in updatedb.conf] ****************** 2025-02-04 09:41:25.413194 | orchestrator | Tuesday 04 February 2025 09:35:49 +0000 (0:00:00.755) 0:08:35.299 ****** 2025-02-04 09:41:25.413199 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.413203 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.413209 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.413213 | orchestrator | 2025-02-04 09:41:25.413218 | orchestrator | TASK [ceph-osd : create tmpfiles.d directory] ********************************** 2025-02-04 09:41:25.413223 | orchestrator | Tuesday 04 February 2025 09:35:50 +0000 (0:00:00.422) 0:08:35.722 ****** 2025-02-04 09:41:25.413228 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.413233 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.413238 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.413243 | orchestrator | 2025-02-04 09:41:25.413248 | orchestrator | TASK [ceph-osd : disable transparent hugepage] ********************************* 2025-02-04 09:41:25.413253 | orchestrator | Tuesday 04 February 2025 09:35:50 +0000 (0:00:00.415) 0:08:36.137 ****** 2025-02-04 09:41:25.413258 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.413263 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.413270 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.413275 | orchestrator | 2025-02-04 09:41:25.413281 | orchestrator | TASK [ceph-osd : get default vm.min_free_kbytes] ******************************* 2025-02-04 09:41:25.413285 | orchestrator | Tuesday 04 February 2025 09:35:50 +0000 (0:00:00.304) 0:08:36.442 ****** 2025-02-04 09:41:25.413290 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.413295 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.413300 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.413305 | orchestrator | 2025-02-04 09:41:25.413310 | orchestrator | TASK [ceph-osd : set_fact vm_min_free_kbytes] ********************************** 2025-02-04 09:41:25.413315 | orchestrator | Tuesday 04 February 2025 09:35:51 +0000 (0:00:00.831) 0:08:37.273 ****** 2025-02-04 09:41:25.413320 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.413325 | orchestrator | ok: [testbed-node-4] 
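[Editor's note: around this point the role captures the kernel's current vm.min_free_kbytes before writing the tuning set applied just below. A sketch of that read-then-floor pattern; the 65536 KiB floor is a hypothetical example, and the 67584 applied below is simply this cluster's computed value:

  - name: get default vm.min_free_kbytes (sketch)
    ansible.builtin.command: sysctl -n vm.min_free_kbytes
    register: default_vm_min_free_kbytes
    changed_when: false

  - name: set_fact vm_min_free_kbytes, never lowering the kernel default
    ansible.builtin.set_fact:
      vm_min_free_kbytes: "{{ [default_vm_min_free_kbytes.stdout | int, 65536] | max }}"

  - name: apply operating system tuning (same module shape as the task that follows)
    ansible.posix.sysctl:
      name: "{{ item.name }}"
      value: "{{ item.value }}"
      state: present
    loop:
      - { name: vm.swappiness, value: "10" }
      - { name: vm.min_free_kbytes, value: "{{ vm_min_free_kbytes }}" }]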
2025-02-04 09:41:25.413330 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.413335 | orchestrator | 2025-02-04 09:41:25.413340 | orchestrator | TASK [ceph-osd : apply operating system tuning] ******************************** 2025-02-04 09:41:25.413345 | orchestrator | Tuesday 04 February 2025 09:35:52 +0000 (0:00:00.380) 0:08:37.653 ****** 2025-02-04 09:41:25.413350 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-02-04 09:41:25.413355 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-02-04 09:41:25.413360 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-02-04 09:41:25.413365 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-02-04 09:41:25.413370 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-02-04 09:41:25.413375 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-02-04 09:41:25.413380 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-02-04 09:41:25.413389 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-02-04 09:41:25.413396 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-02-04 09:41:25.413401 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-02-04 09:41:25.413406 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-02-04 09:41:25.413411 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-02-04 09:41:25.413416 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-02-04 09:41:25.413421 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-02-04 09:41:25.413426 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-02-04 09:41:25.413433 | orchestrator | 2025-02-04 09:41:25.413439 | orchestrator | TASK [ceph-osd : install dependencies] ***************************************** 2025-02-04 09:41:25.413457 | orchestrator | Tuesday 04 February 2025 09:35:57 +0000 (0:00:05.096) 0:08:42.750 ****** 2025-02-04 09:41:25.413463 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.413468 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.413473 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.413478 | orchestrator | 2025-02-04 09:41:25.413483 | orchestrator | TASK [ceph-osd : include_tasks common.yml] ************************************* 2025-02-04 09:41:25.413488 | orchestrator | Tuesday 04 February 2025 09:35:57 +0000 (0:00:00.474) 0:08:43.224 ****** 2025-02-04 09:41:25.413493 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:41:25.413498 | orchestrator | 2025-02-04 09:41:25.413503 | orchestrator | TASK [ceph-osd : create bootstrap-osd and osd directories] ********************* 2025-02-04 09:41:25.413507 | orchestrator | Tuesday 04 February 2025 09:35:58 +0000 (0:00:00.517) 0:08:43.742 ****** 2025-02-04 09:41:25.413512 | orchestrator | ok: [testbed-node-3] => 
(item=/var/lib/ceph/bootstrap-osd/) 2025-02-04 09:41:25.413517 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-02-04 09:41:25.413522 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-02-04 09:41:25.413527 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-02-04 09:41:25.413532 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-02-04 09:41:25.413537 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-02-04 09:41:25.413542 | orchestrator | 2025-02-04 09:41:25.413547 | orchestrator | TASK [ceph-osd : get keys from monitors] *************************************** 2025-02-04 09:41:25.413551 | orchestrator | Tuesday 04 February 2025 09:35:59 +0000 (0:00:00.841) 0:08:44.584 ****** 2025-02-04 09:41:25.413556 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-04 09:41:25.413561 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-04 09:41:25.413566 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-02-04 09:41:25.413571 | orchestrator | 2025-02-04 09:41:25.413576 | orchestrator | TASK [ceph-osd : copy ceph key(s) if needed] *********************************** 2025-02-04 09:41:25.413581 | orchestrator | Tuesday 04 February 2025 09:36:00 +0000 (0:00:01.893) 0:08:46.477 ****** 2025-02-04 09:41:25.413586 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-02-04 09:41:25.413591 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-04 09:41:25.413596 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:41:25.413600 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-02-04 09:41:25.413605 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-02-04 09:41:25.413610 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:41:25.413615 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-02-04 09:41:25.413620 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-02-04 09:41:25.413628 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:41:25.413633 | orchestrator | 2025-02-04 09:41:25.413638 | orchestrator | TASK [ceph-osd : set noup flag] ************************************************ 2025-02-04 09:41:25.413643 | orchestrator | Tuesday 04 February 2025 09:36:02 +0000 (0:00:01.176) 0:08:47.653 ****** 2025-02-04 09:41:25.413648 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-02-04 09:41:25.413653 | orchestrator | 2025-02-04 09:41:25.413658 | orchestrator | TASK [ceph-osd : include container_options_facts.yml] ************************** 2025-02-04 09:41:25.413663 | orchestrator | Tuesday 04 February 2025 09:36:04 +0000 (0:00:02.156) 0:08:49.809 ****** 2025-02-04 09:41:25.413668 | orchestrator | included: /ansible/roles/ceph-osd/tasks/container_options_facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:41:25.413673 | orchestrator | 2025-02-04 09:41:25.413677 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=0'] *** 2025-02-04 09:41:25.413682 | orchestrator | Tuesday 04 February 2025 09:36:05 +0000 (0:00:00.863) 0:08:50.673 ****** 2025-02-04 09:41:25.413687 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.413692 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.413697 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.413702 | orchestrator | 2025-02-04 
09:41:25.413707 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=1'] *** 2025-02-04 09:41:25.413712 | orchestrator | Tuesday 04 February 2025 09:36:05 +0000 (0:00:00.456) 0:08:51.129 ****** 2025-02-04 09:41:25.413717 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.413722 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.413726 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.413731 | orchestrator | 2025-02-04 09:41:25.413736 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=0'] *** 2025-02-04 09:41:25.413741 | orchestrator | Tuesday 04 February 2025 09:36:05 +0000 (0:00:00.359) 0:08:51.489 ****** 2025-02-04 09:41:25.413746 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.413751 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.413756 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.413760 | orchestrator | 2025-02-04 09:41:25.413765 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=1'] *** 2025-02-04 09:41:25.413770 | orchestrator | Tuesday 04 February 2025 09:36:06 +0000 (0:00:00.353) 0:08:51.842 ****** 2025-02-04 09:41:25.413775 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.413780 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.413785 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.413790 | orchestrator | 2025-02-04 09:41:25.413795 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm.yml] ****************************** 2025-02-04 09:41:25.413800 | orchestrator | Tuesday 04 February 2025 09:36:07 +0000 (0:00:00.747) 0:08:52.590 ****** 2025-02-04 09:41:25.413805 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:41:25.413809 | orchestrator | 2025-02-04 09:41:25.413814 | orchestrator | TASK [ceph-osd : use ceph-volume to create bluestore osds] ********************* 2025-02-04 09:41:25.413831 | orchestrator | Tuesday 04 February 2025 09:36:07 +0000 (0:00:00.679) 0:08:53.270 ****** 2025-02-04 09:41:25.413837 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8b56b489-397c-55c4-ba6f-4e97fbbc410a', 'data_vg': 'ceph-8b56b489-397c-55c4-ba6f-4e97fbbc410a'}) 2025-02-04 09:41:25.413844 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a9a0f878-ef24-53af-8bd4-10a12036221e', 'data_vg': 'ceph-a9a0f878-ef24-53af-8bd4-10a12036221e'}) 2025-02-04 09:41:25.413849 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39', 'data_vg': 'ceph-25e96ed1-6b8f-57c8-bdd9-51fb1c446a39'}) 2025-02-04 09:41:25.413855 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-fd89a215-a86e-5b79-8dd1-0773a21fefe5', 'data_vg': 'ceph-fd89a215-a86e-5b79-8dd1-0773a21fefe5'}) 2025-02-04 09:41:25.413866 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-89dbb78a-6e2f-596a-9aad-74f54f8525ce', 'data_vg': 'ceph-89dbb78a-6e2f-596a-9aad-74f54f8525ce'}) 2025-02-04 09:41:25.413871 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-857e455f-002b-509a-b66d-9c4a1025daeb', 'data_vg': 'ceph-857e455f-002b-509a-b66d-9c4a1025daeb'}) 2025-02-04 09:41:25.413876 | orchestrator | 2025-02-04 09:41:25.413881 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm-batch.yml] 
************************ 2025-02-04 09:41:25.413886 | orchestrator | Tuesday 04 February 2025 09:36:43 +0000 (0:00:35.342) 0:09:28.613 ****** 2025-02-04 09:41:25.413891 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.413896 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.413901 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.413906 | orchestrator | 2025-02-04 09:41:25.413911 | orchestrator | TASK [ceph-osd : include_tasks start_osds.yml] ********************************* 2025-02-04 09:41:25.413918 | orchestrator | Tuesday 04 February 2025 09:36:43 +0000 (0:00:00.539) 0:09:29.152 ****** 2025-02-04 09:41:25.413923 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:41:25.413928 | orchestrator | 2025-02-04 09:41:25.413933 | orchestrator | TASK [ceph-osd : get osd ids] ************************************************** 2025-02-04 09:41:25.413938 | orchestrator | Tuesday 04 February 2025 09:36:44 +0000 (0:00:00.597) 0:09:29.750 ****** 2025-02-04 09:41:25.413943 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.413948 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.413953 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.413958 | orchestrator | 2025-02-04 09:41:25.413963 | orchestrator | TASK [ceph-osd : collect osd ids] ********************************************** 2025-02-04 09:41:25.413968 | orchestrator | Tuesday 04 February 2025 09:36:44 +0000 (0:00:00.737) 0:09:30.488 ****** 2025-02-04 09:41:25.413973 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:41:25.413978 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:41:25.413983 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:41:25.413988 | orchestrator | 2025-02-04 09:41:25.413993 | orchestrator | TASK [ceph-osd : include_tasks systemd.yml] ************************************ 2025-02-04 09:41:25.413998 | orchestrator | Tuesday 04 February 2025 09:36:46 +0000 (0:00:01.843) 0:09:32.332 ****** 2025-02-04 09:41:25.414003 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:41:25.414008 | orchestrator | 2025-02-04 09:41:25.414040 | orchestrator | TASK [ceph-osd : generate systemd unit file] *********************************** 2025-02-04 09:41:25.414047 | orchestrator | Tuesday 04 February 2025 09:36:47 +0000 (0:00:00.632) 0:09:32.964 ****** 2025-02-04 09:41:25.414052 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:41:25.414057 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:41:25.414062 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:41:25.414067 | orchestrator | 2025-02-04 09:41:25.414072 | orchestrator | TASK [ceph-osd : generate systemd ceph-osd target file] ************************ 2025-02-04 09:41:25.414077 | orchestrator | Tuesday 04 February 2025 09:36:48 +0000 (0:00:01.456) 0:09:34.420 ****** 2025-02-04 09:41:25.414081 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:41:25.414086 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:41:25.414091 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:41:25.414096 | orchestrator | 2025-02-04 09:41:25.414101 | orchestrator | TASK [ceph-osd : enable ceph-osd.target] *************************************** 2025-02-04 09:41:25.414106 | orchestrator | Tuesday 04 February 2025 09:36:50 +0000 (0:00:01.217) 0:09:35.638 ****** 2025-02-04 09:41:25.414111 | orchestrator | changed: [testbed-node-4] 
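[Editor's note: the systemd tasks above template a per-OSD unit plus a ceph-osd.target that groups the instances, then enable and start everything. A minimal sketch of that containerized-OSD pattern; the template name and the osd_ids variable are illustrative, not the role's actual file names:

  - name: generate systemd unit file (sketch)
    ansible.builtin.template:
      src: ceph-osd.service.j2        # hypothetical template wrapping a container run command
      dest: /etc/systemd/system/ceph-osd@.service
      owner: root
      group: root
      mode: "0644"

  - name: enable ceph-osd.target (sketch)
    ansible.builtin.systemd:
      name: ceph-osd.target
      enabled: true
      daemon_reload: true

  - name: systemd start osd, one instance per OSD id (sketch)
    ansible.builtin.systemd:
      name: "ceph-osd@{{ item }}"
      state: started
      enabled: true
    loop: "{{ osd_ids }}"             # e.g. the ids gathered by 'collect osd ids'

The template-then-instantiate approach is why the 'systemd start osd' task below loops over bare ids (0, 3, …) rather than full unit names.]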
2025-02-04 09:41:25.414116 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:41:25.414121 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:41:25.414126 | orchestrator | 2025-02-04 09:41:25.414131 | orchestrator | TASK [ceph-osd : ensure systemd service override directory exists] ************* 2025-02-04 09:41:25.414136 | orchestrator | Tuesday 04 February 2025 09:36:52 +0000 (0:00:01.985) 0:09:37.624 ****** 2025-02-04 09:41:25.414146 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.414161 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.414167 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.414171 | orchestrator | 2025-02-04 09:41:25.414176 | orchestrator | TASK [ceph-osd : add ceph-osd systemd service overrides] *********************** 2025-02-04 09:41:25.414182 | orchestrator | Tuesday 04 February 2025 09:36:52 +0000 (0:00:00.527) 0:09:38.151 ****** 2025-02-04 09:41:25.414186 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.414191 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.414196 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.414201 | orchestrator | 2025-02-04 09:41:25.414206 | orchestrator | TASK [ceph-osd : ensure "/var/lib/ceph/osd/{{ cluster }}-{{ item }}" is present] *** 2025-02-04 09:41:25.414211 | orchestrator | Tuesday 04 February 2025 09:36:53 +0000 (0:00:00.841) 0:09:38.993 ****** 2025-02-04 09:41:25.414216 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-02-04 09:41:25.414221 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-02-04 09:41:25.414229 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-02-04 09:41:25.414248 | orchestrator | ok: [testbed-node-3] => (item=3) 2025-02-04 09:41:25.414254 | orchestrator | ok: [testbed-node-4] => (item=4) 2025-02-04 09:41:25.414259 | orchestrator | ok: [testbed-node-5] => (item=5) 2025-02-04 09:41:25.414264 | orchestrator | 2025-02-04 09:41:25.414269 | orchestrator | TASK [ceph-osd : systemd start osd] ******************************************** 2025-02-04 09:41:25.414274 | orchestrator | Tuesday 04 February 2025 09:36:54 +0000 (0:00:01.198) 0:09:40.192 ****** 2025-02-04 09:41:25.414279 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-02-04 09:41:25.414284 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-02-04 09:41:25.414289 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-02-04 09:41:25.414294 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-02-04 09:41:25.414302 | orchestrator | changed: [testbed-node-3] => (item=3) 2025-02-04 09:41:25.414307 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-02-04 09:41:25.414312 | orchestrator | 2025-02-04 09:41:25.414317 | orchestrator | TASK [ceph-osd : unset noup flag] ********************************************** 2025-02-04 09:41:25.414322 | orchestrator | Tuesday 04 February 2025 09:36:57 +0000 (0:00:03.352) 0:09:43.545 ****** 2025-02-04 09:41:25.414327 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.414332 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.414337 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-02-04 09:41:25.414341 | orchestrator | 2025-02-04 09:41:25.414347 | orchestrator | TASK [ceph-osd : wait for all osd to be up] ************************************ 2025-02-04 09:41:25.414351 | orchestrator | Tuesday 04 February 2025 09:37:00 +0000 (0:00:02.525) 0:09:46.071 ****** 2025-02-04 09:41:25.414356 | orchestrator | skipping: 
[testbed-node-3] 2025-02-04 09:41:25.414362 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.414367 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: wait for all osd to be up (60 retries left). 2025-02-04 09:41:25.414372 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-02-04 09:41:25.414377 | orchestrator | 2025-02-04 09:41:25.414382 | orchestrator | TASK [ceph-osd : include crush_rules.yml] ************************************** 2025-02-04 09:41:25.414386 | orchestrator | Tuesday 04 February 2025 09:37:12 +0000 (0:00:12.191) 0:09:58.262 ****** 2025-02-04 09:41:25.414391 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.414396 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.414401 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.414406 | orchestrator | 2025-02-04 09:41:25.414411 | orchestrator | TASK [ceph-osd : include openstack_config.yml] ********************************* 2025-02-04 09:41:25.414416 | orchestrator | Tuesday 04 February 2025 09:37:13 +0000 (0:00:00.568) 0:09:58.831 ****** 2025-02-04 09:41:25.414421 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.414426 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.414431 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.414441 | orchestrator | 2025-02-04 09:41:25.414446 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-02-04 09:41:25.414451 | orchestrator | Tuesday 04 February 2025 09:37:14 +0000 (0:00:01.144) 0:09:59.975 ****** 2025-02-04 09:41:25.414455 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:41:25.414460 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:41:25.414466 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:41:25.414471 | orchestrator | 2025-02-04 09:41:25.414475 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-02-04 09:41:25.414483 | orchestrator | Tuesday 04 February 2025 09:37:15 +0000 (0:00:00.707) 0:10:00.683 ****** 2025-02-04 09:41:25.414488 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:41:25.414493 | orchestrator | 2025-02-04 09:41:25.414498 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact trigger_restart] ********************** 2025-02-04 09:41:25.414503 | orchestrator | Tuesday 04 February 2025 09:37:15 +0000 (0:00:00.804) 0:10:01.487 ****** 2025-02-04 09:41:25.414508 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-04 09:41:25.414518 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-04 09:41:25.414523 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-04 09:41:25.414528 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.414533 | orchestrator | 2025-02-04 09:41:25.414538 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called before restart] ******** 2025-02-04 09:41:25.414543 | orchestrator | Tuesday 04 February 2025 09:37:16 +0000 (0:00:00.542) 0:10:02.029 ****** 2025-02-04 09:41:25.414548 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.414553 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.414558 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.414563 | orchestrator | 2025-02-04 09:41:25.414568 | orchestrator | RUNNING HANDLER [ceph-handler : unset noup flag] 
******************************* 2025-02-04 09:41:25.414573 | orchestrator | Tuesday 04 February 2025 09:37:16 +0000 (0:00:00.385) 0:10:02.415 ****** 2025-02-04 09:41:25.414578 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.414583 | orchestrator | 2025-02-04 09:41:25.414588 | orchestrator | RUNNING HANDLER [ceph-handler : copy osd restart script] *********************** 2025-02-04 09:41:25.414592 | orchestrator | Tuesday 04 February 2025 09:37:17 +0000 (0:00:00.295) 0:10:02.710 ****** 2025-02-04 09:41:25.414597 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.414602 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.414607 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.414612 | orchestrator | 2025-02-04 09:41:25.414617 | orchestrator | RUNNING HANDLER [ceph-handler : get pool list] ********************************* 2025-02-04 09:41:25.414622 | orchestrator | Tuesday 04 February 2025 09:37:17 +0000 (0:00:00.549) 0:10:03.260 ****** 2025-02-04 09:41:25.414627 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.414632 | orchestrator | 2025-02-04 09:41:25.414637 | orchestrator | RUNNING HANDLER [ceph-handler : get balancer module status] ******************** 2025-02-04 09:41:25.414642 | orchestrator | Tuesday 04 February 2025 09:37:17 +0000 (0:00:00.273) 0:10:03.533 ****** 2025-02-04 09:41:25.414647 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.414652 | orchestrator | 2025-02-04 09:41:25.414657 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-02-04 09:41:25.414674 | orchestrator | Tuesday 04 February 2025 09:37:18 +0000 (0:00:00.266) 0:10:03.800 ****** 2025-02-04 09:41:25.414680 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.414685 | orchestrator | 2025-02-04 09:41:25.414690 | orchestrator | RUNNING HANDLER [ceph-handler : disable balancer] ****************************** 2025-02-04 09:41:25.414695 | orchestrator | Tuesday 04 February 2025 09:37:18 +0000 (0:00:00.126) 0:10:03.927 ****** 2025-02-04 09:41:25.414700 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.414705 | orchestrator | 2025-02-04 09:41:25.414710 | orchestrator | RUNNING HANDLER [ceph-handler : disable pg autoscale on pools] ***************** 2025-02-04 09:41:25.414720 | orchestrator | Tuesday 04 February 2025 09:37:18 +0000 (0:00:00.256) 0:10:04.184 ****** 2025-02-04 09:41:25.414725 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.414730 | orchestrator | 2025-02-04 09:41:25.414735 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph osds daemon(s)] ******************* 2025-02-04 09:41:25.414740 | orchestrator | Tuesday 04 February 2025 09:37:18 +0000 (0:00:00.315) 0:10:04.499 ****** 2025-02-04 09:41:25.414745 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-04 09:41:25.414750 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-04 09:41:25.414755 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-04 09:41:25.414759 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.414765 | orchestrator | 2025-02-04 09:41:25.414770 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called after restart] ********* 2025-02-04 09:41:25.414775 | orchestrator | Tuesday 04 February 2025 09:37:19 +0000 (0:00:00.423) 0:10:04.922 ****** 2025-02-04 09:41:25.414779 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.414784 | orchestrator | 
skipping: [testbed-node-4] 2025-02-04 09:41:25.414789 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.414794 | orchestrator | 2025-02-04 09:41:25.414799 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable pg autoscale on pools] *************** 2025-02-04 09:41:25.414804 | orchestrator | Tuesday 04 February 2025 09:37:19 +0000 (0:00:00.357) 0:10:05.280 ****** 2025-02-04 09:41:25.414809 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.414814 | orchestrator | 2025-02-04 09:41:25.414819 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable balancer] **************************** 2025-02-04 09:41:25.414824 | orchestrator | Tuesday 04 February 2025 09:37:20 +0000 (0:00:00.501) 0:10:05.782 ****** 2025-02-04 09:41:25.414829 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.414834 | orchestrator | 2025-02-04 09:41:25.414839 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-02-04 09:41:25.414844 | orchestrator | Tuesday 04 February 2025 09:37:20 +0000 (0:00:00.290) 0:10:06.072 ****** 2025-02-04 09:41:25.414849 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:41:25.414854 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:41:25.414859 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:41:25.414863 | orchestrator | 2025-02-04 09:41:25.414868 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-02-04 09:41:25.414873 | orchestrator | 2025-02-04 09:41:25.414878 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-02-04 09:41:25.414883 | orchestrator | Tuesday 04 February 2025 09:37:23 +0000 (0:00:03.403) 0:10:09.475 ****** 2025-02-04 09:41:25.414888 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:41:25.414894 | orchestrator | 2025-02-04 09:41:25.414901 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-02-04 09:41:25.414906 | orchestrator | Tuesday 04 February 2025 09:37:25 +0000 (0:00:01.624) 0:10:11.099 ****** 2025-02-04 09:41:25.414911 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.414916 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.414921 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.414926 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.414931 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.414936 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.414941 | orchestrator | 2025-02-04 09:41:25.414946 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-02-04 09:41:25.414951 | orchestrator | Tuesday 04 February 2025 09:37:26 +0000 (0:00:00.959) 0:10:12.059 ****** 2025-02-04 09:41:25.414956 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.414961 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.414966 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.414970 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.414975 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.414984 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.414991 | orchestrator | 2025-02-04 09:41:25.414996 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-02-04 
09:41:25.415002 | orchestrator | Tuesday 04 February 2025 09:37:27 +0000 (0:00:00.974) 0:10:13.034 ****** 2025-02-04 09:41:25.415006 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.415011 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.415016 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.415021 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.415026 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.415031 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.415036 | orchestrator | 2025-02-04 09:41:25.415041 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-02-04 09:41:25.415046 | orchestrator | Tuesday 04 February 2025 09:37:28 +0000 (0:00:00.720) 0:10:13.754 ****** 2025-02-04 09:41:25.415051 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.415056 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.415061 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.415066 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.415071 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.415075 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.415080 | orchestrator | 2025-02-04 09:41:25.415085 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-02-04 09:41:25.415090 | orchestrator | Tuesday 04 February 2025 09:37:29 +0000 (0:00:00.978) 0:10:14.733 ****** 2025-02-04 09:41:25.415095 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.415100 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.415105 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.415110 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.415115 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.415132 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.415137 | orchestrator | 2025-02-04 09:41:25.415142 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-02-04 09:41:25.415147 | orchestrator | Tuesday 04 February 2025 09:37:30 +0000 (0:00:00.953) 0:10:15.687 ****** 2025-02-04 09:41:25.415163 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.415168 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.415174 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.415179 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.415184 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.415188 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.415194 | orchestrator | 2025-02-04 09:41:25.415199 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-02-04 09:41:25.415203 | orchestrator | Tuesday 04 February 2025 09:37:31 +0000 (0:00:01.182) 0:10:16.869 ****** 2025-02-04 09:41:25.415208 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.415213 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.415218 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.415223 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.415228 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.415233 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.415238 | orchestrator | 2025-02-04 09:41:25.415243 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-02-04 09:41:25.415248 | orchestrator | Tuesday 04 
February 2025 09:37:32 +0000 (0:00:01.067) 0:10:17.937 ****** 2025-02-04 09:41:25.415253 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.415258 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.415263 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.415268 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.415273 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.415278 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.415283 | orchestrator | 2025-02-04 09:41:25.415288 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-02-04 09:41:25.415293 | orchestrator | Tuesday 04 February 2025 09:37:33 +0000 (0:00:01.267) 0:10:19.204 ****** 2025-02-04 09:41:25.415302 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.415307 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.415312 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.415317 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.415322 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.415326 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.415331 | orchestrator | 2025-02-04 09:41:25.415336 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-02-04 09:41:25.415341 | orchestrator | Tuesday 04 February 2025 09:37:34 +0000 (0:00:00.724) 0:10:19.928 ****** 2025-02-04 09:41:25.415346 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.415351 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.415356 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.415361 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.415366 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.415371 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.415376 | orchestrator | 2025-02-04 09:41:25.415381 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-02-04 09:41:25.415386 | orchestrator | Tuesday 04 February 2025 09:37:35 +0000 (0:00:01.080) 0:10:21.009 ****** 2025-02-04 09:41:25.415391 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.415396 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.415401 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.415406 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.415414 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.415419 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.415424 | orchestrator | 2025-02-04 09:41:25.415429 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-02-04 09:41:25.415434 | orchestrator | Tuesday 04 February 2025 09:37:36 +0000 (0:00:01.234) 0:10:22.244 ****** 2025-02-04 09:41:25.415439 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.415444 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.415449 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.415454 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.415459 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.415464 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.415469 | orchestrator | 2025-02-04 09:41:25.415474 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-02-04 09:41:25.415479 | orchestrator | Tuesday 04 February 2025 09:37:37 +0000 
(0:00:01.044) 0:10:23.288 ****** 2025-02-04 09:41:25.415484 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.415489 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.415493 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.415498 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.415503 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.415508 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.415513 | orchestrator | 2025-02-04 09:41:25.415518 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-02-04 09:41:25.415523 | orchestrator | Tuesday 04 February 2025 09:37:38 +0000 (0:00:00.718) 0:10:24.007 ****** 2025-02-04 09:41:25.415528 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.415533 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.415538 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.415543 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.415548 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.415553 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.415558 | orchestrator | 2025-02-04 09:41:25.415563 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-02-04 09:41:25.415568 | orchestrator | Tuesday 04 February 2025 09:37:39 +0000 (0:00:01.090) 0:10:25.097 ****** 2025-02-04 09:41:25.415573 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.415578 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.415583 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.415591 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.415596 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.415601 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.415606 | orchestrator | 2025-02-04 09:41:25.415611 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-02-04 09:41:25.415616 | orchestrator | Tuesday 04 February 2025 09:37:40 +0000 (0:00:00.711) 0:10:25.809 ****** 2025-02-04 09:41:25.415621 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.415626 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.415631 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.415651 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.415657 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.415662 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.415667 | orchestrator | 2025-02-04 09:41:25.415672 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-02-04 09:41:25.415677 | orchestrator | Tuesday 04 February 2025 09:37:41 +0000 (0:00:01.183) 0:10:26.992 ****** 2025-02-04 09:41:25.415682 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.415687 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.415692 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.415697 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.415702 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.415706 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.415711 | orchestrator | 2025-02-04 09:41:25.415716 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-02-04 09:41:25.415721 | orchestrator | Tuesday 04 February 2025 09:37:42 +0000 (0:00:00.814) 0:10:27.807 ****** 2025-02-04 09:41:25.415726 | 
orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.415731 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.415736 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.415741 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.415746 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.415751 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.415756 | orchestrator | 2025-02-04 09:41:25.415761 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-02-04 09:41:25.415766 | orchestrator | Tuesday 04 February 2025 09:37:43 +0000 (0:00:01.163) 0:10:28.971 ****** 2025-02-04 09:41:25.415771 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.415776 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.415781 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.415785 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.415790 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.415795 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.415803 | orchestrator | 2025-02-04 09:41:25.415808 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-02-04 09:41:25.415813 | orchestrator | Tuesday 04 February 2025 09:37:44 +0000 (0:00:00.745) 0:10:29.716 ****** 2025-02-04 09:41:25.415818 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.415823 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.415828 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.415833 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:41:25.415838 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:41:25.415843 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:25.415848 | orchestrator | 2025-02-04 09:41:25.415853 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-02-04 09:41:25.415858 | orchestrator | Tuesday 04 February 2025 09:37:45 +0000 (0:00:01.237) 0:10:30.953 ****** 2025-02-04 09:41:25.415863 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.415868 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.415872 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.415877 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.415883 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.415888 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.415892 | orchestrator | 2025-02-04 09:41:25.415901 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-02-04 09:41:25.415906 | orchestrator | Tuesday 04 February 2025 09:37:46 +0000 (0:00:00.755) 0:10:31.709 ****** 2025-02-04 09:41:25.415911 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.415916 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.415921 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.415926 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:25.415931 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:25.415936 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:25.415941 | orchestrator | 2025-02-04 09:41:25.415946 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-02-04 09:41:25.415951 | orchestrator | Tuesday 04 February 2025 09:37:46 +0000 (0:00:00.845) 0:10:32.555 ****** 2025-02-04 09:41:25.415956 | orchestrator | skipping: [testbed-node-3] 
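[Editor's note: the check_running_containers.yml pass above probes each daemon's container and stores handler_*_status facts that the restart handlers later consult. A sketch of one such probe; container_binary (docker or podman) and the registered variable names are illustrative:

  - name: check for an osd container (sketch)
    ansible.builtin.command: "{{ container_binary }} ps -q --filter name=ceph-osd"
    register: ceph_osd_container_stat
    changed_when: false
    failed_when: false   # an absent container is a valid answer, not an error

  - name: set_fact handler_osd_status (sketch)
    ansible.builtin.set_fact:
      handler_osd_status: "{{ (ceph_osd_container_stat.stdout | default('')) | length > 0 }}"

That is consistent with the skip pattern above: OSD checks run only on the OSD hosts (node-3/4/5), mon and mgr checks only on node-0/1/2.]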

TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
Tuesday 04 February 2025 09:37:45 +0000 (0:00:01.237) 0:10:30.953 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
Tuesday 04 February 2025 09:37:46 +0000 (0:00:00.755) 0:10:31.709 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : reset num_osds] ********************************************
Tuesday 04 February 2025 09:37:46 +0000 (0:00:00.845) 0:10:32.555 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : count number of osds for lvm scenario] *********************
Tuesday 04 February 2025 09:37:47 +0000 (0:00:00.651) 0:10:33.207 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : look up for ceph-volume rejected devices] ******************
Tuesday 04 February 2025 09:37:48 +0000 (0:00:00.873) 0:10:34.080 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : set_fact rejected_devices] *********************************
Tuesday 04 February 2025 09:37:49 +0000 (0:00:00.842) 0:10:34.923 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : set_fact _devices] *****************************************
Tuesday 04 February 2025 09:37:50 +0000 (0:00:01.079) 0:10:36.003 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
Tuesday 04 February 2025 09:37:51 +0000 (0:00:00.847) 0:10:36.850 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
Tuesday 04 February 2025 09:37:52 +0000 (0:00:01.314) 0:10:38.165 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
Tuesday 04 February 2025 09:37:53 +0000 (0:00:00.980) 0:10:39.146 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
Tuesday 04 February 2025 09:37:54 +0000 (0:00:01.080) 0:10:40.227 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
Tuesday 04 February 2025 09:37:55 +0000 (0:00:00.941) 0:10:41.169 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
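[Note: the whole num_osds block is skipped on this pass, but the task names spell out the flow: a dry-run `ceph-volume lvm batch --report` counts OSDs that would be created on the configured devices, `ceph-volume lvm list` counts OSDs that already exist, and the two are summed. A rough sketch of the probe tasks, assuming JSON output; the flags and registered variable names are assumptions, not ceph-ansible's literal code:

    # Hypothetical sketch of the two ceph-volume probes.
    - name: run 'ceph-volume lvm batch --report' (dry run, counts OSDs to create)
      ansible.builtin.command: "ceph-volume lvm batch --report --format=json {{ devices | join(' ') }}"
      register: lvm_batch_report
      changed_when: false

    - name: run 'ceph-volume lvm list' (counts OSDs already created)
      ansible.builtin.command: ceph-volume lvm list --format=json
      register: lvm_list
      changed_when: false

    - name: set_fact num_osds (add existing osds)
      ansible.builtin.set_fact:
        num_osds: "{{ num_osds | int + (lvm_list.stdout | from_json | length) }}"
]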

TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
Tuesday 04 February 2025 09:37:56 +0000 (0:00:01.094) 0:10:42.264 ******
skipping: [testbed-node-3] => (item=)
skipping: [testbed-node-3] => (item=)
skipping: [testbed-node-4] => (item=)
skipping: [testbed-node-4] => (item=)
skipping: [testbed-node-3]
skipping: [testbed-node-5] => (item=)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=)
skipping: [testbed-node-0] => (item=)
skipping: [testbed-node-0] => (item=)
skipping: [testbed-node-5]
skipping: [testbed-node-1] => (item=)
skipping: [testbed-node-1] => (item=)
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=)
skipping: [testbed-node-2] => (item=)
skipping: [testbed-node-2]

TASK [ceph-config : drop osd_memory_target from conf override] *****************
Tuesday 04 February 2025 09:37:57 +0000 (0:00:00.829) 0:10:43.093 ******
skipping: [testbed-node-3] => (item=osd memory target)
skipping: [testbed-node-3] => (item=osd_memory_target)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=osd memory target)
skipping: [testbed-node-4] => (item=osd_memory_target)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=osd memory target)
skipping: [testbed-node-5] => (item=osd_memory_target)
skipping: [testbed-node-0] => (item=osd memory target)
skipping: [testbed-node-0] => (item=osd_memory_target)
skipping: [testbed-node-5]
skipping: [testbed-node-1] => (item=osd memory target)
skipping: [testbed-node-1] => (item=osd_memory_target)
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=osd memory target)
skipping: [testbed-node-2] => (item=osd_memory_target)
skipping: [testbed-node-2]
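[Note: the two loop items visible above, 'osd memory target' and 'osd_memory_target', show that the role checks both spellings of the key when looking for an operator override in ceph_conf_overrides; both tasks are skipped here because this testbed sets no override. An override would plausibly look like this in the inventory (keys and value illustrative only):

    # Hypothetical inventory snippet; the value is an example, not this job's config.
    ceph_conf_overrides:
      osd:
        osd_memory_target: 4294967296   # 4 GiB per OSD; either spelling is honored
]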

TASK [ceph-config : set_fact _osd_memory_target] *******************************
Tuesday 04 February 2025 09:37:58 +0000 (0:00:01.204) 0:10:44.298 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : create ceph conf directory] ********************************
Tuesday 04 February 2025 09:37:59 +0000 (0:00:00.759) 0:10:45.058 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
Tuesday 04 February 2025 09:38:00 +0000 (0:00:01.079) 0:10:46.138 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
Tuesday 04 February 2025 09:38:01 +0000 (0:00:00.705) 0:10:46.844 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
Tuesday 04 February 2025 09:38:02 +0000 (0:00:00.929) 0:10:47.773 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
Tuesday 04 February 2025 09:38:02 +0000 (0:00:00.645) 0:10:48.418 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : set_fact _interface] ****************************************
Tuesday 04 February 2025 09:38:03 +0000 (0:00:01.108) 0:10:49.527 ******
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
Tuesday 04 February 2025 09:38:04 +0000 (0:00:00.555) 0:10:50.083 ******
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
Tuesday 04 February 2025 09:38:04 +0000 (0:00:00.471) 0:10:50.554 ******
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
Tuesday 04 February 2025 09:38:05 +0000 (0:00:00.468) 0:10:51.023 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
Tuesday 04 February 2025 09:38:06 +0000 (0:00:00.836) 0:10:51.860 ******
skipping: [testbed-node-3] => (item=0)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=0)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=0)
skipping: [testbed-node-5]
skipping: [testbed-node-0] => (item=0)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=0)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=0)
skipping: [testbed-node-2]

TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
Tuesday 04 February 2025 09:38:07 +0000 (0:00:00.888) 0:10:52.749 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
Tuesday 04 February 2025 09:38:08 +0000 (0:00:00.867) 0:10:53.616 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
Tuesday 04 February 2025 09:38:08 +0000 (0:00:00.674) 0:10:54.291 ******
skipping: [testbed-node-3] => (item=0)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=0)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=0)
skipping: [testbed-node-5]
skipping: [testbed-node-0] => (item=0)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=0)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=0)
skipping: [testbed-node-2]

TASK [ceph-facts : set_fact rgw_instances_host] ********************************
Tuesday 04 February 2025 09:38:09 +0000 (0:00:01.130) 0:10:55.422 ******
skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : set_fact rgw_instances_all] *********************************
Tuesday 04 February 2025 09:38:10 +0000 (0:00:00.782) 0:10:56.204 ******
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=testbed-node-3)
skipping: [testbed-node-4] => (item=testbed-node-4)
skipping: [testbed-node-4] => (item=testbed-node-5)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=testbed-node-3)
skipping: [testbed-node-5] => (item=testbed-node-4)
skipping: [testbed-node-5] => (item=testbed-node-5)
skipping: [testbed-node-5]
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=testbed-node-3)
skipping: [testbed-node-1] => (item=testbed-node-4)
skipping: [testbed-node-1] => (item=testbed-node-5)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=testbed-node-3)
skipping: [testbed-node-2] => (item=testbed-node-4)
skipping: [testbed-node-2] => (item=testbed-node-5)
skipping: [testbed-node-2]
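[Note: although every rgw_instances task is skipped for this host pattern, the loop items above reveal the shape of the per-host fact ceph-ansible builds: one RGW instance named rgw0, bound to the node's storage-network address on frontend port 8081. Reconstructed from the logged items (testbed-node-3 shown; -4 and -5 differ only in the address):

    rgw_instances:
      - instance_name: rgw0
        radosgw_address: 192.168.16.13
        radosgw_frontend_port: 8081
]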

TASK [ceph-config : generate ceph.conf configuration file] *********************
Tuesday 04 February 2025 09:38:12 +0000 (0:00:01.514) 0:10:57.719 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-rgw : create rgw keyrings] ******************************************
Tuesday 04 February 2025 09:38:13 +0000 (0:00:01.337) 0:10:59.056 ******
skipping: [testbed-node-3] => (item=None)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=None)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=None)
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-rgw : include_tasks multisite] **************************************
Tuesday 04 February 2025 09:38:14 +0000 (0:00:01.495) 0:11:00.551 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : set_fact multisite_called_from_handler_role] **************
Tuesday 04 February 2025 09:38:16 +0000 (0:00:01.347) 0:11:01.899 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-crash : create client.crash keyring] ********************************
Tuesday 04 February 2025 09:38:17 +0000 (0:00:01.425) 0:11:03.325 ******
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-crash : get keys from monitors] *************************************
Tuesday 04 February 2025 09:38:20 +0000 (0:00:02.888) 0:11:06.214 ******
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-crash : copy ceph key(s) if needed] *********************************
Tuesday 04 February 2025 09:38:22 +0000 (0:00:01.835) 0:11:08.050 ******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
ok: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-crash : create /var/lib/ceph/crash/posted] **************************
Tuesday 04 February 2025 09:38:23 +0000 (0:00:01.476) 0:11:09.526 ******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-crash : include_tasks systemd.yml] **********************************
Tuesday 04 February 2025 09:38:25 +0000 (0:00:01.302) 0:11:10.828 ******
included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-crash : generate systemd unit file for ceph-crash container] ********
Tuesday 04 February 2025 09:38:26 +0000 (0:00:01.428) 0:11:12.256 ******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-crash : start the ceph-crash service] *******************************
Tuesday 04 February 2025 09:38:28 +0000 (0:00:01.673) 0:11:13.930 ******
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
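[Note: the ceph-crash role deploys the crash-report collector as a container managed by systemd on all six nodes: keyring, the /var/lib/ceph/crash/posted directory, a templated unit file, then service start. A condensed sketch of the last two steps; the template path and unit name are assumptions, not the role's exact files:

    # Hypothetical sketch; ceph-crash.service.j2 and the unit name are assumptions.
    - name: generate systemd unit file for ceph-crash container
      ansible.builtin.template:
        src: ceph-crash.service.j2
        dest: /etc/systemd/system/ceph-crash@.service
        mode: "0644"
      notify: ceph crash handler

    - name: start the ceph-crash service
      ansible.builtin.systemd:
        name: ceph-crash@{{ ansible_facts['hostname'] }}
        state: started
        enabled: true
        daemon_reload: true
]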

RUNNING HANDLER [ceph-handler : ceph crash handler] ****************************
Tuesday 04 February 2025 09:38:32 +0000 (0:00:04.194) 0:11:18.124 ******
included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : set _crash_handler_called before restart] ******
Tuesday 04 February 2025 09:38:33 +0000 (0:00:01.159) 0:11:19.284 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : restart the ceph-crash service] ****************
Tuesday 04 February 2025 09:38:34 +0000 (0:00:00.898) 0:11:20.182 ******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : set _crash_handler_called after restart] *******
Tuesday 04 February 2025 09:38:36 +0000 (0:00:02.235) 0:11:22.417 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
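[Note: because the unit file changed, the notified handler restarts the service right after the start task. The before/after guard facts bracket the restart so the handler fires at most once even if several tasks notify it. Roughly, as a sketch of the pattern rather than the role's literal code:

    # Hypothetical sketch of the guarded-restart handler pattern.
    - name: set _crash_handler_called before restart
      ansible.builtin.set_fact:
        _crash_handler_called: true

    - name: restart the ceph-crash service
      ansible.builtin.systemd:
        name: ceph-crash@{{ ansible_facts['hostname'] }}
        state: restarted
      when: _crash_handler_called | bool

    - name: set _crash_handler_called after restart
      ansible.builtin.set_fact:
        _crash_handler_called: false
]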
******************************** 2025-02-04 09:41:25.418269 | orchestrator | Tuesday 04 February 2025 09:38:40 +0000 (0:00:00.604) 0:11:25.907 ****** 2025-02-04 09:41:25.418274 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.418279 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.418283 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.418288 | orchestrator | 2025-02-04 09:41:25.418293 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-02-04 09:41:25.418298 | orchestrator | Tuesday 04 February 2025 09:38:40 +0000 (0:00:00.351) 0:11:26.258 ****** 2025-02-04 09:41:25.418303 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.418308 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.418313 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.418318 | orchestrator | 2025-02-04 09:41:25.418323 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-02-04 09:41:25.418328 | orchestrator | Tuesday 04 February 2025 09:38:41 +0000 (0:00:01.024) 0:11:27.283 ****** 2025-02-04 09:41:25.418333 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.418338 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.418343 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.418348 | orchestrator | 2025-02-04 09:41:25.418353 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-02-04 09:41:25.418358 | orchestrator | Tuesday 04 February 2025 09:38:42 +0000 (0:00:00.724) 0:11:28.007 ****** 2025-02-04 09:41:25.418363 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.418367 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.418372 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.418377 | orchestrator | 2025-02-04 09:41:25.418382 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-02-04 09:41:25.418387 | orchestrator | Tuesday 04 February 2025 09:38:43 +0000 (0:00:00.654) 0:11:28.662 ****** 2025-02-04 09:41:25.418392 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.418397 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.418402 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.418407 | orchestrator | 2025-02-04 09:41:25.418412 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-02-04 09:41:25.418417 | orchestrator | Tuesday 04 February 2025 09:38:43 +0000 (0:00:00.499) 0:11:29.161 ****** 2025-02-04 09:41:25.418422 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.418427 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.418432 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.418439 | orchestrator | 2025-02-04 09:41:25.418448 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-02-04 09:41:25.418454 | orchestrator | Tuesday 04 February 2025 09:38:43 +0000 (0:00:00.354) 0:11:29.516 ****** 2025-02-04 09:41:25.418459 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.418464 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.418469 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.418473 | orchestrator | 2025-02-04 09:41:25.418478 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-02-04 09:41:25.418483 | orchestrator | Tuesday 04 February 2025 09:38:44 
+0000 (0:00:00.337) 0:11:29.853 ****** 2025-02-04 09:41:25.418491 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.418496 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.418501 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.418506 | orchestrator | 2025-02-04 09:41:25.418511 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-02-04 09:41:25.418516 | orchestrator | Tuesday 04 February 2025 09:38:44 +0000 (0:00:00.345) 0:11:30.198 ****** 2025-02-04 09:41:25.418521 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.418525 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.418530 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.418535 | orchestrator | 2025-02-04 09:41:25.418540 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-02-04 09:41:25.418545 | orchestrator | Tuesday 04 February 2025 09:38:45 +0000 (0:00:00.548) 0:11:30.747 ****** 2025-02-04 09:41:25.418550 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.418555 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.418560 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.418565 | orchestrator | 2025-02-04 09:41:25.418569 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-02-04 09:41:25.418574 | orchestrator | Tuesday 04 February 2025 09:38:45 +0000 (0:00:00.407) 0:11:31.154 ****** 2025-02-04 09:41:25.418579 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.418584 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.418589 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.418594 | orchestrator | 2025-02-04 09:41:25.418599 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-02-04 09:41:25.418604 | orchestrator | Tuesday 04 February 2025 09:38:46 +0000 (0:00:00.688) 0:11:31.843 ****** 2025-02-04 09:41:25.418609 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.418614 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.418619 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.418624 | orchestrator | 2025-02-04 09:41:25.418629 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-02-04 09:41:25.418634 | orchestrator | Tuesday 04 February 2025 09:38:46 +0000 (0:00:00.367) 0:11:32.211 ****** 2025-02-04 09:41:25.418638 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.418643 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.418648 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.418653 | orchestrator | 2025-02-04 09:41:25.418658 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-02-04 09:41:25.418663 | orchestrator | Tuesday 04 February 2025 09:38:47 +0000 (0:00:00.646) 0:11:32.857 ****** 2025-02-04 09:41:25.418668 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.418673 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.418678 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.418683 | orchestrator | 2025-02-04 09:41:25.418688 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-02-04 09:41:25.418693 | orchestrator | Tuesday 04 February 2025 09:38:47 +0000 (0:00:00.379) 0:11:33.237 ****** 2025-02-04 09:41:25.418698 | orchestrator | ok: 
[testbed-node-3] 2025-02-04 09:41:25.418702 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.418707 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.418712 | orchestrator | 2025-02-04 09:41:25.418717 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-02-04 09:41:25.418726 | orchestrator | Tuesday 04 February 2025 09:38:48 +0000 (0:00:00.413) 0:11:33.651 ****** 2025-02-04 09:41:25.418732 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.418736 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.418741 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.418746 | orchestrator | 2025-02-04 09:41:25.418751 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-02-04 09:41:25.418756 | orchestrator | Tuesday 04 February 2025 09:38:48 +0000 (0:00:00.368) 0:11:34.019 ****** 2025-02-04 09:41:25.418761 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.418769 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.418774 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.418779 | orchestrator | 2025-02-04 09:41:25.418784 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-02-04 09:41:25.418789 | orchestrator | Tuesday 04 February 2025 09:38:49 +0000 (0:00:00.612) 0:11:34.632 ****** 2025-02-04 09:41:25.418794 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.418799 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.418804 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.418808 | orchestrator | 2025-02-04 09:41:25.418813 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-02-04 09:41:25.418818 | orchestrator | Tuesday 04 February 2025 09:38:49 +0000 (0:00:00.401) 0:11:35.033 ****** 2025-02-04 09:41:25.418823 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.418828 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.418833 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.418838 | orchestrator | 2025-02-04 09:41:25.418843 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-02-04 09:41:25.418848 | orchestrator | Tuesday 04 February 2025 09:38:49 +0000 (0:00:00.324) 0:11:35.357 ****** 2025-02-04 09:41:25.418853 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.418858 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.418863 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.418868 | orchestrator | 2025-02-04 09:41:25.418873 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-02-04 09:41:25.418878 | orchestrator | Tuesday 04 February 2025 09:38:50 +0000 (0:00:00.344) 0:11:35.702 ****** 2025-02-04 09:41:25.418883 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.418888 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.418892 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.418897 | orchestrator | 2025-02-04 09:41:25.418905 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-02-04 09:41:25.418910 | orchestrator | Tuesday 04 February 2025 09:38:50 +0000 (0:00:00.628) 0:11:36.330 ****** 2025-02-04 09:41:25.418915 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.418920 | orchestrator | skipping: [testbed-node-4] 2025-02-04 
09:41:25.418925 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.418933 | orchestrator | 2025-02-04 09:41:25.418938 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-02-04 09:41:25.418943 | orchestrator | Tuesday 04 February 2025 09:38:51 +0000 (0:00:00.360) 0:11:36.691 ****** 2025-02-04 09:41:25.418948 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.418953 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.418957 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.418965 | orchestrator | 2025-02-04 09:41:25.418970 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-02-04 09:41:25.418975 | orchestrator | Tuesday 04 February 2025 09:38:51 +0000 (0:00:00.354) 0:11:37.046 ****** 2025-02-04 09:41:25.418980 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.418985 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.418990 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.418994 | orchestrator | 2025-02-04 09:41:25.418999 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-02-04 09:41:25.419004 | orchestrator | Tuesday 04 February 2025 09:38:51 +0000 (0:00:00.442) 0:11:37.488 ****** 2025-02-04 09:41:25.419009 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.419014 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.419019 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.419024 | orchestrator | 2025-02-04 09:41:25.419029 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-02-04 09:41:25.419034 | orchestrator | Tuesday 04 February 2025 09:38:52 +0000 (0:00:00.623) 0:11:38.112 ****** 2025-02-04 09:41:25.419039 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.419044 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.419052 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.419057 | orchestrator | 2025-02-04 09:41:25.419062 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-02-04 09:41:25.419067 | orchestrator | Tuesday 04 February 2025 09:38:52 +0000 (0:00:00.288) 0:11:38.400 ****** 2025-02-04 09:41:25.419072 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.419076 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.419081 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.419086 | orchestrator | 2025-02-04 09:41:25.419091 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-02-04 09:41:25.419096 | orchestrator | Tuesday 04 February 2025 09:38:53 +0000 (0:00:00.420) 0:11:38.820 ****** 2025-02-04 09:41:25.419101 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.419106 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.419111 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.419116 | orchestrator | 2025-02-04 09:41:25.419121 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-02-04 09:41:25.419126 | orchestrator | Tuesday 04 February 2025 09:38:53 +0000 (0:00:00.383) 0:11:39.204 ****** 2025-02-04 09:41:25.419131 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.419136 | orchestrator | skipping: [testbed-node-4] 2025-02-04 
09:41:25.419141 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.419146 | orchestrator | 2025-02-04 09:41:25.419162 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-02-04 09:41:25.419167 | orchestrator | Tuesday 04 February 2025 09:38:54 +0000 (0:00:00.577) 0:11:39.782 ****** 2025-02-04 09:41:25.419172 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.419177 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.419182 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.419187 | orchestrator | 2025-02-04 09:41:25.419192 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-02-04 09:41:25.419197 | orchestrator | Tuesday 04 February 2025 09:38:54 +0000 (0:00:00.316) 0:11:40.099 ****** 2025-02-04 09:41:25.419202 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.419206 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.419211 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.419216 | orchestrator | 2025-02-04 09:41:25.419221 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-02-04 09:41:25.419226 | orchestrator | Tuesday 04 February 2025 09:38:54 +0000 (0:00:00.285) 0:11:40.384 ****** 2025-02-04 09:41:25.419231 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.419236 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.419241 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.419246 | orchestrator | 2025-02-04 09:41:25.419251 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-02-04 09:41:25.419256 | orchestrator | Tuesday 04 February 2025 09:38:55 +0000 (0:00:00.279) 0:11:40.664 ****** 2025-02-04 09:41:25.419261 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-04 09:41:25.419266 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-04 09:41:25.419271 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.419276 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-04 09:41:25.419281 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-04 09:41:25.419286 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.419291 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-04 09:41:25.419296 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-04 09:41:25.419301 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.419305 | orchestrator | 2025-02-04 09:41:25.419313 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-02-04 09:41:25.419318 | orchestrator | Tuesday 04 February 2025 09:38:55 +0000 (0:00:00.574) 0:11:41.238 ****** 2025-02-04 09:41:25.419323 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-02-04 09:41:25.419331 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-02-04 09:41:25.419336 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.419341 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-02-04 09:41:25.419349 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-02-04 09:41:25.419354 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.419359 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-02-04 
09:41:25.419364 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-02-04 09:41:25.419369 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.419374 | orchestrator | 2025-02-04 09:41:25.419379 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-02-04 09:41:25.419384 | orchestrator | Tuesday 04 February 2025 09:38:56 +0000 (0:00:00.352) 0:11:41.591 ****** 2025-02-04 09:41:25.419388 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.419393 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.419398 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.419403 | orchestrator | 2025-02-04 09:41:25.419408 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-02-04 09:41:25.419413 | orchestrator | Tuesday 04 February 2025 09:38:56 +0000 (0:00:00.268) 0:11:41.859 ****** 2025-02-04 09:41:25.419418 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.419423 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.419428 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.419433 | orchestrator | 2025-02-04 09:41:25.419438 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-02-04 09:41:25.419443 | orchestrator | Tuesday 04 February 2025 09:38:56 +0000 (0:00:00.288) 0:11:42.147 ****** 2025-02-04 09:41:25.419448 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.419453 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.419458 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.419463 | orchestrator | 2025-02-04 09:41:25.419468 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-02-04 09:41:25.419473 | orchestrator | Tuesday 04 February 2025 09:38:57 +0000 (0:00:00.521) 0:11:42.668 ****** 2025-02-04 09:41:25.419478 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.419483 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.419487 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.419492 | orchestrator | 2025-02-04 09:41:25.419497 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-02-04 09:41:25.419502 | orchestrator | Tuesday 04 February 2025 09:38:57 +0000 (0:00:00.279) 0:11:42.948 ****** 2025-02-04 09:41:25.419507 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.419512 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.419517 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.419522 | orchestrator | 2025-02-04 09:41:25.419527 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-02-04 09:41:25.419532 | orchestrator | Tuesday 04 February 2025 09:38:57 +0000 (0:00:00.297) 0:11:43.245 ****** 2025-02-04 09:41:25.419537 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.419542 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.419547 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.419552 | orchestrator | 2025-02-04 09:41:25.419557 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-02-04 09:41:25.419562 | orchestrator | Tuesday 04 February 2025 09:38:57 +0000 (0:00:00.286) 0:11:43.532 ****** 2025-02-04 09:41:25.419567 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-3)  2025-02-04 09:41:25.419572 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-04 09:41:25.419577 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-04 09:41:25.419582 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.419593 | orchestrator | 2025-02-04 09:41:25.419598 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-02-04 09:41:25.419603 | orchestrator | Tuesday 04 February 2025 09:38:58 +0000 (0:00:00.725) 0:11:44.257 ****** 2025-02-04 09:41:25.419608 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-04 09:41:25.419613 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-04 09:41:25.419618 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-04 09:41:25.419623 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.419629 | orchestrator | 2025-02-04 09:41:25.419634 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-02-04 09:41:25.419639 | orchestrator | Tuesday 04 February 2025 09:38:59 +0000 (0:00:00.815) 0:11:45.073 ****** 2025-02-04 09:41:25.419644 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-04 09:41:25.419649 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-04 09:41:25.419654 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-04 09:41:25.419659 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.419664 | orchestrator | 2025-02-04 09:41:25.419669 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-04 09:41:25.419674 | orchestrator | Tuesday 04 February 2025 09:38:59 +0000 (0:00:00.417) 0:11:45.490 ****** 2025-02-04 09:41:25.419679 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.419684 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.419689 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.419694 | orchestrator | 2025-02-04 09:41:25.419699 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-02-04 09:41:25.419704 | orchestrator | Tuesday 04 February 2025 09:39:00 +0000 (0:00:00.374) 0:11:45.865 ****** 2025-02-04 09:41:25.419709 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-04 09:41:25.419714 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.419719 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-04 09:41:25.419724 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.419729 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-04 09:41:25.419734 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.419739 | orchestrator | 2025-02-04 09:41:25.419744 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-02-04 09:41:25.419749 | orchestrator | Tuesday 04 February 2025 09:39:00 +0000 (0:00:00.508) 0:11:46.373 ****** 2025-02-04 09:41:25.419753 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.419760 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.419766 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.419771 | orchestrator | 2025-02-04 09:41:25.419775 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-04 09:41:25.419780 | 
orchestrator | Tuesday 04 February 2025 09:39:01 +0000 (0:00:00.360) 0:11:46.734 ****** 2025-02-04 09:41:25.419785 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.419790 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.419795 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.419800 | orchestrator | 2025-02-04 09:41:25.419805 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-02-04 09:41:25.419810 | orchestrator | Tuesday 04 February 2025 09:39:01 +0000 (0:00:00.718) 0:11:47.453 ****** 2025-02-04 09:41:25.419815 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-04 09:41:25.419820 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.419825 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-04 09:41:25.419830 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.419834 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-04 09:41:25.419839 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.419844 | orchestrator | 2025-02-04 09:41:25.419849 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-02-04 09:41:25.419857 | orchestrator | Tuesday 04 February 2025 09:39:02 +0000 (0:00:00.523) 0:11:47.977 ****** 2025-02-04 09:41:25.419862 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-02-04 09:41:25.419867 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.419872 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-02-04 09:41:25.419877 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.419882 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-02-04 09:41:25.419887 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.419892 | orchestrator | 2025-02-04 09:41:25.419897 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-02-04 09:41:25.419905 | orchestrator | Tuesday 04 February 2025 09:39:02 +0000 (0:00:00.432) 0:11:48.410 ****** 2025-02-04 09:41:25.419910 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-04 09:41:25.419914 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-04 09:41:25.419919 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-04 09:41:25.419924 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.419929 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-02-04 09:41:25.419934 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-02-04 09:41:25.419939 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-02-04 09:41:25.419944 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.419949 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-02-04 09:41:25.419954 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-02-04 09:41:25.419959 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-02-04 09:41:25.419964 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.419969 | orchestrator | 2025-02-04 09:41:25.419974 | orchestrator | TASK [ceph-config 
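For orientation: the skipped rgw_instances_host items above show the shape each RGW instance fact takes ({'instance_name': 'rgw0', 'radosgw_address': <node IP>, 'radosgw_frontend_port': 8081}). A minimal sketch of how such a list can be composed per host — the variable names (_radosgw_address, radosgw_frontend_port, radosgw_num_instances) are illustrative, not necessarily ceph-ansible's exact internals:

- name: Compose rgw_instances for this host (illustrative sketch)
  ansible.builtin.set_fact:
    rgw_instances: >-
      {{ rgw_instances | default([]) +
         [{'instance_name': 'rgw' ~ item,
           'radosgw_address': _radosgw_address,
           'radosgw_frontend_port': radosgw_frontend_port | int + item | int}] }}
  loop: "{{ range(0, radosgw_num_instances | default(1)) | list }}"

With one instance per node this yields exactly one 'rgw0' entry per host, which is what the skipped items above display.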
: generate ceph.conf configuration file] ********************* 2025-02-04 09:41:25.419979 | orchestrator | Tuesday 04 February 2025 09:39:03 +0000 (0:00:01.142) 0:11:49.552 ****** 2025-02-04 09:41:25.419984 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.419989 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.419993 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.419998 | orchestrator | 2025-02-04 09:41:25.420003 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-02-04 09:41:25.420008 | orchestrator | Tuesday 04 February 2025 09:39:04 +0000 (0:00:00.676) 0:11:50.229 ****** 2025-02-04 09:41:25.420013 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-04 09:41:25.420018 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.420023 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-02-04 09:41:25.420028 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.420036 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-02-04 09:41:25.420041 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.420046 | orchestrator | 2025-02-04 09:41:25.420051 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-02-04 09:41:25.420056 | orchestrator | Tuesday 04 February 2025 09:39:05 +0000 (0:00:01.054) 0:11:51.284 ****** 2025-02-04 09:41:25.420061 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.420066 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.420071 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.420076 | orchestrator | 2025-02-04 09:41:25.420081 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-02-04 09:41:25.420086 | orchestrator | Tuesday 04 February 2025 09:39:06 +0000 (0:00:00.630) 0:11:51.915 ****** 2025-02-04 09:41:25.420091 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.420096 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.420103 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.420108 | orchestrator | 2025-02-04 09:41:25.420113 | orchestrator | TASK [ceph-mds : include create_mds_filesystems.yml] *************************** 2025-02-04 09:41:25.420118 | orchestrator | Tuesday 04 February 2025 09:39:07 +0000 (0:00:00.975) 0:11:52.890 ****** 2025-02-04 09:41:25.420123 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.420128 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.420133 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-02-04 09:41:25.420138 | orchestrator | 2025-02-04 09:41:25.420143 | orchestrator | TASK [ceph-facts : get current default crush rule details] ********************* 2025-02-04 09:41:25.420148 | orchestrator | Tuesday 04 February 2025 09:39:07 +0000 (0:00:00.533) 0:11:53.424 ****** 2025-02-04 09:41:25.420180 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-02-04 09:41:25.420186 | orchestrator | 2025-02-04 09:41:25.420191 | orchestrator | TASK [ceph-facts : get current default crush rule name] ************************ 2025-02-04 09:41:25.420196 | orchestrator | Tuesday 04 February 2025 09:39:09 +0000 (0:00:01.841) 0:11:55.266 ****** 2025-02-04 09:41:25.420202 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 
'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-02-04 09:41:25.420209 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.420214 | orchestrator | 2025-02-04 09:41:25.420219 | orchestrator | TASK [ceph-mds : create filesystem pools] ************************************** 2025-02-04 09:41:25.420224 | orchestrator | Tuesday 04 February 2025 09:39:10 +0000 (0:00:00.676) 0:11:55.942 ****** 2025-02-04 09:41:25.420230 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-02-04 09:41:25.420236 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-02-04 09:41:25.420241 | orchestrator | 2025-02-04 09:41:25.420246 | orchestrator | TASK [ceph-mds : create ceph filesystem] *************************************** 2025-02-04 09:41:25.420251 | orchestrator | Tuesday 04 February 2025 09:39:17 +0000 (0:00:06.844) 0:12:02.787 ****** 2025-02-04 09:41:25.420256 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-02-04 09:41:25.420261 | orchestrator | 2025-02-04 09:41:25.420266 | orchestrator | TASK [ceph-mds : include common.yml] ******************************************* 2025-02-04 09:41:25.420270 | orchestrator | Tuesday 04 February 2025 09:39:20 +0000 (0:00:02.940) 0:12:05.727 ****** 2025-02-04 09:41:25.420275 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:41:25.420280 | orchestrator | 2025-02-04 09:41:25.420285 | orchestrator | TASK [ceph-mds : create bootstrap-mds and mds directories] ********************* 2025-02-04 09:41:25.420290 | orchestrator | Tuesday 04 February 2025 09:39:20 +0000 (0:00:00.660) 0:12:06.387 ****** 2025-02-04 09:41:25.420297 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-02-04 09:41:25.420302 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-02-04 09:41:25.420307 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-02-04 09:41:25.420312 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-02-04 09:41:25.420317 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-02-04 09:41:25.420322 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-02-04 09:41:25.420327 | orchestrator | 2025-02-04 09:41:25.420332 | orchestrator | TASK [ceph-mds : get keys from monitors] *************************************** 2025-02-04 09:41:25.420340 | orchestrator | Tuesday 04 February 2025 09:39:22 +0000 (0:00:01.171) 0:12:07.559 ****** 2025-02-04 09:41:25.420345 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-04 09:41:25.420350 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-04 09:41:25.420355 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-02-04 09:41:25.420360 | orchestrator | 2025-02-04 09:41:25.420365 | orchestrator | TASK [ceph-mds : copy 
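The two 'changed' results above create the cephfs_data and cephfs_metadata pools (pg_num/pgp_num 16, replicated_rule, size 3) and then the filesystem on top of them. Roughly the same effect, sketched against the plain ceph CLI (the role's real task may drive a custom module; the 'mons' inventory group name is an assumption):

- name: Create CephFS pools (sketch; parameters mirror the log items)
  ansible.builtin.command: ceph osd pool create {{ item }} 16 16 replicated replicated_rule
  loop: [cephfs_data, cephfs_metadata]
  delegate_to: "{{ groups['mons'][0] }}"
  changed_when: true
  # size 3 matches the log items; if it is not the cluster default,
  # follow up with: ceph osd pool set <pool> size 3

- name: Create the filesystem (metadata pool first, then data pool)
  ansible.builtin.command: ceph fs new cephfs cephfs_metadata cephfs_data
  delegate_to: "{{ groups['mons'][0] }}"
  changed_when: true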
ceph key(s) if needed] *********************************** 2025-02-04 09:41:25.420370 | orchestrator | Tuesday 04 February 2025 09:39:24 +0000 (0:00:02.262) 0:12:09.821 ****** 2025-02-04 09:41:25.420375 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-02-04 09:41:25.420380 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-04 09:41:25.420385 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:41:25.420390 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-02-04 09:41:25.420395 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-02-04 09:41:25.420400 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:41:25.420405 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-02-04 09:41:25.420410 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-02-04 09:41:25.420415 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:41:25.420420 | orchestrator | 2025-02-04 09:41:25.420425 | orchestrator | TASK [ceph-mds : non_containerized.yml] **************************************** 2025-02-04 09:41:25.420434 | orchestrator | Tuesday 04 February 2025 09:39:25 +0000 (0:00:01.402) 0:12:11.224 ****** 2025-02-04 09:41:25.420439 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.420444 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.420449 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.420454 | orchestrator | 2025-02-04 09:41:25.420459 | orchestrator | TASK [ceph-mds : containerized.yml] ******************************************** 2025-02-04 09:41:25.420464 | orchestrator | Tuesday 04 February 2025 09:39:26 +0000 (0:00:00.424) 0:12:11.649 ****** 2025-02-04 09:41:25.420469 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:41:25.420474 | orchestrator | 2025-02-04 09:41:25.420478 | orchestrator | TASK [ceph-mds : include_tasks systemd.yml] ************************************ 2025-02-04 09:41:25.420483 | orchestrator | Tuesday 04 February 2025 09:39:27 +0000 (0:00:00.964) 0:12:12.613 ****** 2025-02-04 09:41:25.420490 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:41:25.420496 | orchestrator | 2025-02-04 09:41:25.420501 | orchestrator | TASK [ceph-mds : generate systemd unit file] *********************************** 2025-02-04 09:41:25.420506 | orchestrator | Tuesday 04 February 2025 09:39:27 +0000 (0:00:00.667) 0:12:13.281 ****** 2025-02-04 09:41:25.420510 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:41:25.420515 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:41:25.420520 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:41:25.420525 | orchestrator | 2025-02-04 09:41:25.420530 | orchestrator | TASK [ceph-mds : generate systemd ceph-mds target file] ************************ 2025-02-04 09:41:25.420535 | orchestrator | Tuesday 04 February 2025 09:39:29 +0000 (0:00:01.601) 0:12:14.882 ****** 2025-02-04 09:41:25.420540 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:41:25.420545 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:41:25.420550 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:41:25.420557 | orchestrator | 2025-02-04 09:41:25.420562 | orchestrator | TASK [ceph-mds : enable ceph-mds.target] *************************************** 2025-02-04 09:41:25.420567 | orchestrator | Tuesday 04 February 2025 09:39:30 +0000 (0:00:01.353) 0:12:16.236 ****** 
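The unit-file tasks in this stretch ('generate systemd unit file', 'generate systemd ceph-mds target file', 'enable ceph-mds.target') reduce to templating two units and enabling the umbrella target so the containerized MDS starts on boot. A minimal sketch — the template name and destination paths are illustrative:

- name: Template the containerized MDS service unit (illustrative paths)
  ansible.builtin.template:
    src: ceph-mds.service.j2
    dest: /etc/systemd/system/ceph-mds@.service
    mode: "0644"

- name: Enable the target that groups the mds instances
  ansible.builtin.systemd:
    name: ceph-mds.target
    enabled: true
    daemon_reload: true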
2025-02-04 09:41:25.420572 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:41:25.420577 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:41:25.420582 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:41:25.420587 | orchestrator | 2025-02-04 09:41:25.420592 | orchestrator | TASK [ceph-mds : systemd start mds container] ********************************** 2025-02-04 09:41:25.420600 | orchestrator | Tuesday 04 February 2025 09:39:32 +0000 (0:00:01.932) 0:12:18.168 ****** 2025-02-04 09:41:25.420605 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:41:25.420610 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:41:25.420615 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:41:25.420620 | orchestrator | 2025-02-04 09:41:25.420625 | orchestrator | TASK [ceph-mds : wait for mds socket to exist] ********************************* 2025-02-04 09:41:25.420629 | orchestrator | Tuesday 04 February 2025 09:39:34 +0000 (0:00:02.210) 0:12:20.379 ****** 2025-02-04 09:41:25.420634 | orchestrator | FAILED - RETRYING: [testbed-node-3]: wait for mds socket to exist (5 retries left). 2025-02-04 09:41:25.420639 | orchestrator | FAILED - RETRYING: [testbed-node-4]: wait for mds socket to exist (5 retries left). 2025-02-04 09:41:25.420644 | orchestrator | FAILED - RETRYING: [testbed-node-5]: wait for mds socket to exist (5 retries left). 2025-02-04 09:41:25.420649 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.420654 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.420659 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.420664 | orchestrator | 2025-02-04 09:41:25.420669 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-02-04 09:41:25.420674 | orchestrator | Tuesday 04 February 2025 09:39:52 +0000 (0:00:17.627) 0:12:38.006 ****** 2025-02-04 09:41:25.420679 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:41:25.420684 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:41:25.420689 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:41:25.420694 | orchestrator | 2025-02-04 09:41:25.420699 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-02-04 09:41:25.420704 | orchestrator | Tuesday 04 February 2025 09:39:53 +0000 (0:00:00.807) 0:12:38.813 ****** 2025-02-04 09:41:25.420709 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:41:25.420714 | orchestrator | 2025-02-04 09:41:25.420719 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called before restart] ******** 2025-02-04 09:41:25.420724 | orchestrator | Tuesday 04 February 2025 09:39:53 +0000 (0:00:00.620) 0:12:39.434 ****** 2025-02-04 09:41:25.420729 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.420734 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.420739 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.420744 | orchestrator | 2025-02-04 09:41:25.420748 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 2025-02-04 09:41:25.420753 | orchestrator | Tuesday 04 February 2025 09:39:54 +0000 (0:00:00.740) 0:12:40.175 ****** 2025-02-04 09:41:25.420758 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:41:25.420763 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:41:25.420768 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:41:25.420773 | orchestrator | 2025-02-04 
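The 'wait for mds socket to exist' retries above poll for the daemon's admin socket after the container starts; each node fails the first probe and succeeds on a later attempt, hence the task's long wall time. A comparable retry loop — the socket path pattern is an assumption:

- name: Wait for the MDS admin socket to appear (sketch)
  ansible.builtin.stat:
    path: /var/run/ceph/ceph-mds.{{ ansible_facts['hostname'] }}.asok
  register: mds_socket
  retries: 5
  delay: 15
  until: mds_socket.stat.exists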
09:41:25.420778 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mds daemon(s)] ******************** 2025-02-04 09:41:25.420783 | orchestrator | Tuesday 04 February 2025 09:39:55 +0000 (0:00:01.346) 0:12:41.521 ****** 2025-02-04 09:41:25.420788 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-04 09:41:25.420793 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-04 09:41:25.420798 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-04 09:41:25.420803 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.420808 | orchestrator | 2025-02-04 09:41:25.420813 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-02-04 09:41:25.420818 | orchestrator | Tuesday 04 February 2025 09:39:57 +0000 (0:00:01.198) 0:12:42.720 ****** 2025-02-04 09:41:25.420823 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.420828 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.420833 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.420838 | orchestrator | 2025-02-04 09:41:25.420843 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-02-04 09:41:25.420848 | orchestrator | Tuesday 04 February 2025 09:39:57 +0000 (0:00:00.486) 0:12:43.206 ****** 2025-02-04 09:41:25.420855 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:41:25.420860 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:41:25.420865 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:41:25.420870 | orchestrator | 2025-02-04 09:41:25.420878 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-02-04 09:41:25.420883 | orchestrator | 2025-02-04 09:41:25.420888 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-02-04 09:41:25.420893 | orchestrator | Tuesday 04 February 2025 09:40:00 +0000 (0:00:02.957) 0:12:46.163 ****** 2025-02-04 09:41:25.420900 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:41:25.420905 | orchestrator | 2025-02-04 09:41:25.420910 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-02-04 09:41:25.420915 | orchestrator | Tuesday 04 February 2025 09:40:01 +0000 (0:00:00.658) 0:12:46.821 ****** 2025-02-04 09:41:25.420919 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.420924 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.420929 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.420934 | orchestrator | 2025-02-04 09:41:25.420939 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-02-04 09:41:25.420944 | orchestrator | Tuesday 04 February 2025 09:40:01 +0000 (0:00:00.690) 0:12:47.512 ****** 2025-02-04 09:41:25.420949 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.420954 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.420959 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.420964 | orchestrator | 2025-02-04 09:41:25.420969 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-02-04 09:41:25.420973 | orchestrator | Tuesday 04 February 2025 09:40:02 +0000 (0:00:00.774) 0:12:48.287 ****** 2025-02-04 09:41:25.420978 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.420983 | 
orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.420988 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.420993 | orchestrator | 2025-02-04 09:41:25.420998 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-02-04 09:41:25.421003 | orchestrator | Tuesday 04 February 2025 09:40:03 +0000 (0:00:00.777) 0:12:49.064 ****** 2025-02-04 09:41:25.421008 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.421013 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.421018 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.421023 | orchestrator | 2025-02-04 09:41:25.421028 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-02-04 09:41:25.421033 | orchestrator | Tuesday 04 February 2025 09:40:04 +0000 (0:00:01.131) 0:12:50.195 ****** 2025-02-04 09:41:25.421037 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.421042 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.421047 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.421052 | orchestrator | 2025-02-04 09:41:25.421057 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-02-04 09:41:25.421062 | orchestrator | Tuesday 04 February 2025 09:40:05 +0000 (0:00:00.397) 0:12:50.593 ****** 2025-02-04 09:41:25.421067 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.421072 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.421077 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.421082 | orchestrator | 2025-02-04 09:41:25.421087 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-02-04 09:41:25.421092 | orchestrator | Tuesday 04 February 2025 09:40:05 +0000 (0:00:00.441) 0:12:51.034 ****** 2025-02-04 09:41:25.421097 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.421102 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.421107 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.421111 | orchestrator | 2025-02-04 09:41:25.421116 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-02-04 09:41:25.421121 | orchestrator | Tuesday 04 February 2025 09:40:05 +0000 (0:00:00.367) 0:12:51.402 ****** 2025-02-04 09:41:25.421129 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.421134 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.421139 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.421144 | orchestrator | 2025-02-04 09:41:25.421149 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-02-04 09:41:25.421166 | orchestrator | Tuesday 04 February 2025 09:40:06 +0000 (0:00:00.745) 0:12:52.147 ****** 2025-02-04 09:41:25.421171 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.421176 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.421184 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.421189 | orchestrator | 2025-02-04 09:41:25.421194 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-02-04 09:41:25.421199 | orchestrator | Tuesday 04 February 2025 09:40:07 +0000 (0:00:00.446) 0:12:52.594 ****** 2025-02-04 09:41:25.421203 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.421208 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.421213 | orchestrator | 
skipping: [testbed-node-5] 2025-02-04 09:41:25.421218 | orchestrator | 2025-02-04 09:41:25.421223 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-02-04 09:41:25.421228 | orchestrator | Tuesday 04 February 2025 09:40:07 +0000 (0:00:00.410) 0:12:53.005 ****** 2025-02-04 09:41:25.421233 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.421238 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.421243 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.421248 | orchestrator | 2025-02-04 09:41:25.421253 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-02-04 09:41:25.421258 | orchestrator | Tuesday 04 February 2025 09:40:08 +0000 (0:00:00.883) 0:12:53.888 ****** 2025-02-04 09:41:25.421263 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.421268 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.421273 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.421278 | orchestrator | 2025-02-04 09:41:25.421283 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-02-04 09:41:25.421288 | orchestrator | Tuesday 04 February 2025 09:40:09 +0000 (0:00:00.689) 0:12:54.578 ****** 2025-02-04 09:41:25.421293 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.421298 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.421303 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.421308 | orchestrator | 2025-02-04 09:41:25.421313 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-02-04 09:41:25.421318 | orchestrator | Tuesday 04 February 2025 09:40:09 +0000 (0:00:00.395) 0:12:54.973 ****** 2025-02-04 09:41:25.421322 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.421327 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.421332 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.421337 | orchestrator | 2025-02-04 09:41:25.421345 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-02-04 09:41:25.421350 | orchestrator | Tuesday 04 February 2025 09:40:09 +0000 (0:00:00.451) 0:12:55.425 ****** 2025-02-04 09:41:25.421355 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.421360 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.421365 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.421370 | orchestrator | 2025-02-04 09:41:25.421377 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-02-04 09:41:25.421382 | orchestrator | Tuesday 04 February 2025 09:40:10 +0000 (0:00:00.464) 0:12:55.889 ****** 2025-02-04 09:41:25.421387 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.421392 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.421396 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.421401 | orchestrator | 2025-02-04 09:41:25.421406 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-02-04 09:41:25.421411 | orchestrator | Tuesday 04 February 2025 09:40:10 +0000 (0:00:00.587) 0:12:56.477 ****** 2025-02-04 09:41:25.421416 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.421424 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.421429 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.421434 | orchestrator | 2025-02-04 09:41:25.421439 | orchestrator | TASK [ceph-handler 
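The 'check for a … container' probes and the handler_*_status facts above follow one pattern per daemon type: look for a running container, then record a boolean that the handlers consult before restarting anything. Sketched for the mds case (the docker name filter is an assumption):

- name: Check for a running mds container (sketch)
  ansible.builtin.command: docker ps -q --filter name=ceph-mds-{{ ansible_facts['hostname'] }}
  register: ceph_mds_container_stat
  changed_when: false
  failed_when: false

- name: Record the result for the handlers
  ansible.builtin.set_fact:
    handler_mds_status: "{{ ceph_mds_container_stat.stdout | length > 0 }}"

Only the osd, mds, rgw and ceph-crash probes return 'ok' here because those are the daemons present on testbed-node-3/4/5; the mon, mgr, nfs and iSCSI checks are skipped.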
: set_fact handler_rbd_status] ****************************** 2025-02-04 09:41:25.421444 | orchestrator | Tuesday 04 February 2025 09:40:11 +0000 (0:00:00.325) 0:12:56.802 ****** 2025-02-04 09:41:25.421449 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.421454 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.421459 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.421464 | orchestrator | 2025-02-04 09:41:25.421468 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-02-04 09:41:25.421473 | orchestrator | Tuesday 04 February 2025 09:40:11 +0000 (0:00:00.352) 0:12:57.155 ****** 2025-02-04 09:41:25.421478 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.421483 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.421488 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.421493 | orchestrator | 2025-02-04 09:41:25.421498 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-02-04 09:41:25.421503 | orchestrator | Tuesday 04 February 2025 09:40:11 +0000 (0:00:00.315) 0:12:57.471 ****** 2025-02-04 09:41:25.421508 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.421513 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.421518 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.421523 | orchestrator | 2025-02-04 09:41:25.421528 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-02-04 09:41:25.421533 | orchestrator | Tuesday 04 February 2025 09:40:12 +0000 (0:00:00.578) 0:12:58.049 ****** 2025-02-04 09:41:25.421538 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.421543 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.421548 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.421553 | orchestrator | 2025-02-04 09:41:25.421558 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-02-04 09:41:25.421563 | orchestrator | Tuesday 04 February 2025 09:40:12 +0000 (0:00:00.345) 0:12:58.395 ****** 2025-02-04 09:41:25.421568 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.421573 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.421577 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.421582 | orchestrator | 2025-02-04 09:41:25.421587 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-02-04 09:41:25.421592 | orchestrator | Tuesday 04 February 2025 09:40:13 +0000 (0:00:00.368) 0:12:58.763 ****** 2025-02-04 09:41:25.421600 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.421605 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.421610 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.421615 | orchestrator | 2025-02-04 09:41:25.421620 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-02-04 09:41:25.421625 | orchestrator | Tuesday 04 February 2025 09:40:13 +0000 (0:00:00.315) 0:12:59.079 ****** 2025-02-04 09:41:25.421630 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.421635 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.421640 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.421645 | orchestrator | 2025-02-04 09:41:25.421650 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-02-04 
09:41:25.421654 | orchestrator | Tuesday 04 February 2025 09:40:14 +0000 (0:00:00.566) 0:12:59.645 ****** 2025-02-04 09:41:25.421659 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.421664 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.421669 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.421674 | orchestrator | 2025-02-04 09:41:25.421679 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-02-04 09:41:25.421684 | orchestrator | Tuesday 04 February 2025 09:40:14 +0000 (0:00:00.356) 0:13:00.002 ****** 2025-02-04 09:41:25.421689 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.421694 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.421704 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.421709 | orchestrator | 2025-02-04 09:41:25.421714 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-02-04 09:41:25.421719 | orchestrator | Tuesday 04 February 2025 09:40:14 +0000 (0:00:00.317) 0:13:00.320 ****** 2025-02-04 09:41:25.421724 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.421729 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.421734 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.421739 | orchestrator | 2025-02-04 09:41:25.421745 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-02-04 09:41:25.421750 | orchestrator | Tuesday 04 February 2025 09:40:15 +0000 (0:00:00.348) 0:13:00.669 ****** 2025-02-04 09:41:25.421755 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.421760 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.421765 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.421770 | orchestrator | 2025-02-04 09:41:25.421775 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-02-04 09:41:25.421780 | orchestrator | Tuesday 04 February 2025 09:40:15 +0000 (0:00:00.602) 0:13:01.271 ****** 2025-02-04 09:41:25.421785 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.421790 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.421795 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.421800 | orchestrator | 2025-02-04 09:41:25.421805 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-02-04 09:41:25.421810 | orchestrator | Tuesday 04 February 2025 09:40:16 +0000 (0:00:00.354) 0:13:01.625 ****** 2025-02-04 09:41:25.421818 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.421824 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.421829 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.421834 | orchestrator | 2025-02-04 09:41:25.421839 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-02-04 09:41:25.421844 | orchestrator | Tuesday 04 February 2025 09:40:16 +0000 (0:00:00.333) 0:13:01.959 ****** 2025-02-04 09:41:25.421849 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.421854 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.421859 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.421864 | orchestrator | 2025-02-04 09:41:25.421869 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] 
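The skipped ceph-config block above would, on an OSD play, compute num_osds in two steps: prospective OSDs from 'ceph-volume lvm batch --report' plus already-created ones from 'ceph-volume lvm list'; the total later feeds the osd_memory_target sizing. A hedged sketch of the 'add existing osds' half:

- name: Count OSDs that already exist on this host (sketch)
  ansible.builtin.command: ceph-volume lvm list --format json
  register: ceph_volume_list
  changed_when: false

- name: Add them to num_osds
  ansible.builtin.set_fact:
    num_osds: "{{ num_osds | default(0) | int + (ceph_volume_list.stdout | from_json | length) }}"

The JSON report is keyed by OSD id, so its length is the per-host OSD count.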
********************* 2025-02-04 09:41:25.421874 | orchestrator | Tuesday 04 February 2025 09:40:16 +0000 (0:00:00.330) 0:13:02.289 ****** 2025-02-04 09:41:25.421879 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.421884 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.421889 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.421893 | orchestrator | 2025-02-04 09:41:25.421901 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-02-04 09:41:25.421906 | orchestrator | Tuesday 04 February 2025 09:40:17 +0000 (0:00:00.788) 0:13:03.078 ****** 2025-02-04 09:41:25.421911 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-04 09:41:25.421916 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-02-04 09:41:25.421921 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.421926 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-04 09:41:25.421930 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-02-04 09:41:25.421935 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.421940 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-04 09:41:25.421945 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-02-04 09:41:25.421950 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.421955 | orchestrator | 2025-02-04 09:41:25.421960 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-02-04 09:41:25.421965 | orchestrator | Tuesday 04 February 2025 09:40:18 +0000 (0:00:00.486) 0:13:03.564 ****** 2025-02-04 09:41:25.421970 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-02-04 09:41:25.421978 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-02-04 09:41:25.421983 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.421988 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-02-04 09:41:25.421993 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-02-04 09:41:25.421998 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.422003 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-02-04 09:41:25.422008 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-02-04 09:41:25.422028 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.422034 | orchestrator | 2025-02-04 09:41:25.422039 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-02-04 09:41:25.422044 | orchestrator | Tuesday 04 February 2025 09:40:18 +0000 (0:00:00.446) 0:13:04.010 ****** 2025-02-04 09:41:25.422049 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.422054 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.422059 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.422064 | orchestrator | 2025-02-04 09:41:25.422068 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-02-04 09:41:25.422073 | orchestrator | Tuesday 04 February 2025 09:40:18 +0000 (0:00:00.396) 0:13:04.406 ****** 2025-02-04 09:41:25.422078 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.422083 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.422088 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.422093 | orchestrator | 2025-02-04 09:41:25.422098 | orchestrator | TASK 
[ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-02-04 09:41:25.422103 | orchestrator | Tuesday 04 February 2025 09:40:19 +0000 (0:00:00.743) 0:13:05.150 ****** 2025-02-04 09:41:25.422108 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.422113 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.422118 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.422123 | orchestrator | 2025-02-04 09:41:25.422128 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-02-04 09:41:25.422133 | orchestrator | Tuesday 04 February 2025 09:40:20 +0000 (0:00:00.410) 0:13:05.560 ****** 2025-02-04 09:41:25.422138 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.422143 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.422147 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.422224 | orchestrator | 2025-02-04 09:41:25.422230 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-02-04 09:41:25.422235 | orchestrator | Tuesday 04 February 2025 09:40:20 +0000 (0:00:00.501) 0:13:06.062 ****** 2025-02-04 09:41:25.422240 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.422245 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.422249 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.422254 | orchestrator | 2025-02-04 09:41:25.422259 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-02-04 09:41:25.422264 | orchestrator | Tuesday 04 February 2025 09:40:20 +0000 (0:00:00.457) 0:13:06.519 ****** 2025-02-04 09:41:25.422269 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.422274 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.422279 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.422284 | orchestrator | 2025-02-04 09:41:25.422288 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-02-04 09:41:25.422293 | orchestrator | Tuesday 04 February 2025 09:40:21 +0000 (0:00:00.756) 0:13:07.276 ****** 2025-02-04 09:41:25.422298 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-04 09:41:25.422303 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-04 09:41:25.422308 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-04 09:41:25.422313 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.422323 | orchestrator | 2025-02-04 09:41:25.422328 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-02-04 09:41:25.422336 | orchestrator | Tuesday 04 February 2025 09:40:22 +0000 (0:00:00.596) 0:13:07.872 ****** 2025-02-04 09:41:25.422341 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-04 09:41:25.422346 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-04 09:41:25.422351 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-04 09:41:25.422356 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.422360 | orchestrator | 2025-02-04 09:41:25.422365 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-02-04 09:41:25.422370 | orchestrator | Tuesday 04 February 2025 09:40:22 +0000 (0:00:00.657) 0:13:08.530 ****** 2025-02-04 
09:41:25.422375 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-04 09:41:25.422380 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-04 09:41:25.422385 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-04 09:41:25.422390 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.422395 | orchestrator | 2025-02-04 09:41:25.422400 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-04 09:41:25.422405 | orchestrator | Tuesday 04 February 2025 09:40:23 +0000 (0:00:00.615) 0:13:09.146 ****** 2025-02-04 09:41:25.422410 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.422415 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.422420 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.422425 | orchestrator | 2025-02-04 09:41:25.422430 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-02-04 09:41:25.422435 | orchestrator | Tuesday 04 February 2025 09:40:24 +0000 (0:00:00.416) 0:13:09.562 ****** 2025-02-04 09:41:25.422440 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-04 09:41:25.422445 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.422449 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-04 09:41:25.422454 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.422460 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-04 09:41:25.422464 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.422473 | orchestrator | 2025-02-04 09:41:25.422478 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-02-04 09:41:25.422483 | orchestrator | Tuesday 04 February 2025 09:40:24 +0000 (0:00:00.514) 0:13:10.077 ****** 2025-02-04 09:41:25.422488 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.422493 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.422498 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.422503 | orchestrator | 2025-02-04 09:41:25.422508 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-04 09:41:25.422513 | orchestrator | Tuesday 04 February 2025 09:40:25 +0000 (0:00:00.722) 0:13:10.799 ****** 2025-02-04 09:41:25.422518 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.422522 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.422527 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.422532 | orchestrator | 2025-02-04 09:41:25.422537 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-02-04 09:41:25.422542 | orchestrator | Tuesday 04 February 2025 09:40:25 +0000 (0:00:00.508) 0:13:11.307 ****** 2025-02-04 09:41:25.422547 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-04 09:41:25.422552 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.422557 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-04 09:41:25.422562 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.422566 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-04 09:41:25.422571 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.422576 | orchestrator | 2025-02-04 09:41:25.422581 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-02-04 09:41:25.422589 | 
orchestrator | Tuesday 04 February 2025 09:40:26 +0000 (0:00:00.578) 0:13:11.885 ****** 2025-02-04 09:41:25.422596 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-02-04 09:41:25.422601 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.422606 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-02-04 09:41:25.422611 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.422616 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-02-04 09:41:25.422621 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.422626 | orchestrator | 2025-02-04 09:41:25.422631 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-02-04 09:41:25.422635 | orchestrator | Tuesday 04 February 2025 09:40:26 +0000 (0:00:00.386) 0:13:12.272 ****** 2025-02-04 09:41:25.422640 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-04 09:41:25.422645 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-04 09:41:25.422650 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-04 09:41:25.422655 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.422660 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-02-04 09:41:25.422665 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-02-04 09:41:25.422669 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-02-04 09:41:25.422674 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.422679 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-02-04 09:41:25.422684 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-02-04 09:41:25.422689 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-02-04 09:41:25.422694 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.422699 | orchestrator | 2025-02-04 09:41:25.422703 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-02-04 09:41:25.422710 | orchestrator | Tuesday 04 February 2025 09:40:27 +0000 (0:00:01.139) 0:13:13.411 ****** 2025-02-04 09:41:25.422715 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.422720 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.422725 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.422730 | orchestrator | 2025-02-04 09:41:25.422735 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-02-04 09:41:25.422740 | orchestrator | Tuesday 04 February 2025 09:40:28 +0000 (0:00:00.624) 0:13:14.036 ****** 2025-02-04 09:41:25.422745 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-04 09:41:25.422750 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.422755 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-02-04 09:41:25.422760 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.422765 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-02-04 09:41:25.422769 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.422774 | orchestrator | 2025-02-04 09:41:25.422779 | orchestrator | TASK 
[ceph-rgw : include_tasks multisite] ************************************** 2025-02-04 09:41:25.422784 | orchestrator | Tuesday 04 February 2025 09:40:29 +0000 (0:00:01.065) 0:13:15.102 ****** 2025-02-04 09:41:25.422789 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.422794 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.422799 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.422804 | orchestrator | 2025-02-04 09:41:25.422809 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-02-04 09:41:25.422814 | orchestrator | Tuesday 04 February 2025 09:40:30 +0000 (0:00:00.701) 0:13:15.804 ****** 2025-02-04 09:41:25.422819 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.422823 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.422831 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.422836 | orchestrator | 2025-02-04 09:41:25.422841 | orchestrator | TASK [ceph-rgw : include common.yml] ******************************************* 2025-02-04 09:41:25.422846 | orchestrator | Tuesday 04 February 2025 09:40:31 +0000 (0:00:01.037) 0:13:16.842 ****** 2025-02-04 09:41:25.422851 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:41:25.422856 | orchestrator | 2025-02-04 09:41:25.422861 | orchestrator | TASK [ceph-rgw : create rados gateway directories] ***************************** 2025-02-04 09:41:25.422866 | orchestrator | Tuesday 04 February 2025 09:40:32 +0000 (0:00:00.738) 0:13:17.580 ****** 2025-02-04 09:41:25.422870 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2025-02-04 09:41:25.422876 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2025-02-04 09:41:25.422881 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2025-02-04 09:41:25.422886 | orchestrator | 2025-02-04 09:41:25.422891 | orchestrator | TASK [ceph-rgw : get keys from monitors] *************************************** 2025-02-04 09:41:25.422896 | orchestrator | Tuesday 04 February 2025 09:40:33 +0000 (0:00:01.245) 0:13:18.826 ****** 2025-02-04 09:41:25.422901 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-04 09:41:25.422906 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-04 09:41:25.422911 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-02-04 09:41:25.422916 | orchestrator | 2025-02-04 09:41:25.422921 | orchestrator | TASK [ceph-rgw : copy ceph key(s) if needed] *********************************** 2025-02-04 09:41:25.422925 | orchestrator | Tuesday 04 February 2025 09:40:35 +0000 (0:00:02.111) 0:13:20.937 ****** 2025-02-04 09:41:25.422930 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-02-04 09:41:25.422935 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-02-04 09:41:25.422940 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:41:25.422945 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-02-04 09:41:25.422950 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-02-04 09:41:25.422955 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:41:25.422960 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-02-04 09:41:25.422965 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-02-04 09:41:25.422969 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:41:25.422974 | orchestrator | 2025-02-04 
09:41:25.422979 | orchestrator | TASK [ceph-rgw : copy SSL certificate & key data to certificate path] ********** 2025-02-04 09:41:25.422984 | orchestrator | Tuesday 04 February 2025 09:40:36 +0000 (0:00:01.431) 0:13:22.369 ****** 2025-02-04 09:41:25.422989 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.422994 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.422999 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.423004 | orchestrator | 2025-02-04 09:41:25.423009 | orchestrator | TASK [ceph-rgw : include_tasks pre_requisite.yml] ****************************** 2025-02-04 09:41:25.423014 | orchestrator | Tuesday 04 February 2025 09:40:37 +0000 (0:00:00.417) 0:13:22.786 ****** 2025-02-04 09:41:25.423019 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.423024 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.423029 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.423034 | orchestrator | 2025-02-04 09:41:25.423038 | orchestrator | TASK [ceph-rgw : rgw pool creation tasks] ************************************** 2025-02-04 09:41:25.423043 | orchestrator | Tuesday 04 February 2025 09:40:37 +0000 (0:00:00.500) 0:13:23.286 ****** 2025-02-04 09:41:25.423048 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-02-04 09:41:25.423053 | orchestrator | 2025-02-04 09:41:25.423058 | orchestrator | TASK [ceph-rgw : create ec profile] ******************************************** 2025-02-04 09:41:25.423063 | orchestrator | Tuesday 04 February 2025 09:40:37 +0000 (0:00:00.215) 0:13:23.502 ****** 2025-02-04 09:41:25.423068 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-04 09:41:25.423076 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-04 09:41:25.423081 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-04 09:41:25.423088 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-04 09:41:25.423093 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-04 09:41:25.423098 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.423103 | orchestrator | 2025-02-04 09:41:25.423108 | orchestrator | TASK [ceph-rgw : set crush rule] *********************************************** 2025-02-04 09:41:25.423113 | orchestrator | Tuesday 04 February 2025 09:40:38 +0000 (0:00:00.659) 0:13:24.161 ****** 2025-02-04 09:41:25.423120 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-04 09:41:25.423125 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-04 09:41:25.423130 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-04 09:41:25.423135 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-04 
09:41:25.423140 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-04 09:41:25.423145 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.423166 | orchestrator | 2025-02-04 09:41:25.423171 | orchestrator | TASK [ceph-rgw : create ec pools for rgw] ************************************** 2025-02-04 09:41:25.423179 | orchestrator | Tuesday 04 February 2025 09:40:39 +0000 (0:00:00.832) 0:13:24.994 ****** 2025-02-04 09:41:25.423185 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-04 09:41:25.423190 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-04 09:41:25.423194 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-04 09:41:25.423199 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-04 09:41:25.423204 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-02-04 09:41:25.423209 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.423214 | orchestrator | 2025-02-04 09:41:25.423219 | orchestrator | TASK [ceph-rgw : create replicated pools for rgw] ****************************** 2025-02-04 09:41:25.423224 | orchestrator | Tuesday 04 February 2025 09:40:40 +0000 (0:00:00.847) 0:13:25.842 ****** 2025-02-04 09:41:25.423229 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-02-04 09:41:25.423235 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-02-04 09:41:25.423240 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-02-04 09:41:25.423245 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-02-04 09:41:25.423254 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-02-04 09:41:25.423259 | orchestrator | 2025-02-04 09:41:25.423263 | orchestrator | TASK [ceph-rgw : include_tasks openstack-keystone.yml] ************************* 2025-02-04 09:41:25.423268 | orchestrator | Tuesday 04 February 2025 09:41:04 +0000 (0:00:24.117) 0:13:49.959 ****** 2025-02-04 09:41:25.423273 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.423278 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.423283 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.423288 | orchestrator | 2025-02-04 09:41:25.423293 | orchestrator | TASK [ceph-rgw : include_tasks start_radosgw.yml] ****************************** 2025-02-04 09:41:25.423298 | orchestrator | Tuesday 04 February 2025 09:41:04 +0000 (0:00:00.519) 0:13:50.479 ****** 2025-02-04 09:41:25.423303 | 
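The 24-second 'create replicated pools for rgw' step above creates the five default.rgw.* pools (pg_num 8, replicated, size 3), delegated to the first monitor. An equivalent loop sketched against the plain CLI — the rgw_create_pools variable name mirrors the dict items visible in the log but is an assumption here:

- name: Create replicated RGW pools (sketch)
  ansible.builtin.command: ceph osd pool create {{ item.key }} {{ item.value.pg_num }} {{ item.value.pg_num }} replicated
  loop: "{{ rgw_create_pools | dict2items }}"
  delegate_to: "{{ groups['mons'][0] }}"
  changed_when: true

- name: Tag each pool for the rgw application
  ansible.builtin.command: ceph osd pool application enable {{ item.key }} rgw
  loop: "{{ rgw_create_pools | dict2items }}"
  delegate_to: "{{ groups['mons'][0] }}"
  changed_when: true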
orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.423308 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.423313 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.423318 | orchestrator | 2025-02-04 09:41:25.423323 | orchestrator | TASK [ceph-rgw : include start_docker_rgw.yml] ********************************* 2025-02-04 09:41:25.423328 | orchestrator | Tuesday 04 February 2025 09:41:05 +0000 (0:00:00.442) 0:13:50.921 ****** 2025-02-04 09:41:25.423333 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:41:25.423338 | orchestrator | 2025-02-04 09:41:25.423343 | orchestrator | TASK [ceph-rgw : include_task systemd.yml] ************************************* 2025-02-04 09:41:25.423347 | orchestrator | Tuesday 04 February 2025 09:41:06 +0000 (0:00:00.760) 0:13:51.682 ****** 2025-02-04 09:41:25.423354 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:41:25.423359 | orchestrator | 2025-02-04 09:41:25.423364 | orchestrator | TASK [ceph-rgw : generate systemd unit file] *********************************** 2025-02-04 09:41:25.423369 | orchestrator | Tuesday 04 February 2025 09:41:07 +0000 (0:00:00.950) 0:13:52.632 ****** 2025-02-04 09:41:25.423374 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:41:25.423379 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:41:25.423384 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:41:25.423389 | orchestrator | 2025-02-04 09:41:25.423394 | orchestrator | TASK [ceph-rgw : generate systemd ceph-radosgw target file] ******************** 2025-02-04 09:41:25.423399 | orchestrator | Tuesday 04 February 2025 09:41:08 +0000 (0:00:01.322) 0:13:53.955 ****** 2025-02-04 09:41:25.423404 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:41:25.423409 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:41:25.423416 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:41:25.423421 | orchestrator | 2025-02-04 09:41:25.423426 | orchestrator | TASK [ceph-rgw : enable ceph-radosgw.target] *********************************** 2025-02-04 09:41:25.423431 | orchestrator | Tuesday 04 February 2025 09:41:09 +0000 (0:00:01.206) 0:13:55.162 ****** 2025-02-04 09:41:25.423436 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:41:25.423441 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:41:25.423446 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:41:25.423451 | orchestrator | 2025-02-04 09:41:25.423456 | orchestrator | TASK [ceph-rgw : systemd start rgw container] ********************************** 2025-02-04 09:41:25.423461 | orchestrator | Tuesday 04 February 2025 09:41:11 +0000 (0:00:02.383) 0:13:57.546 ****** 2025-02-04 09:41:25.423466 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-02-04 09:41:25.423471 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-02-04 09:41:25.423476 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-02-04 09:41:25.423481 | orchestrator | 2025-02-04 09:41:25.423486 | orchestrator | TASK [ceph-rgw : include_tasks multisite/main.yml] ***************************** 2025-02-04 09:41:25.423493 | 
orchestrator | Tuesday 04 February 2025 09:41:14 +0000 (0:00:02.291) 0:13:59.838 ****** 2025-02-04 09:41:25.423498 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.423503 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:41:25.423508 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:41:25.423513 | orchestrator | 2025-02-04 09:41:25.423518 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-02-04 09:41:25.423523 | orchestrator | Tuesday 04 February 2025 09:41:15 +0000 (0:00:01.186) 0:14:01.024 ****** 2025-02-04 09:41:25.423528 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:41:25.423533 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:41:25.423538 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:41:25.423543 | orchestrator | 2025-02-04 09:41:25.423548 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-02-04 09:41:25.423552 | orchestrator | Tuesday 04 February 2025 09:41:16 +0000 (0:00:00.644) 0:14:01.669 ****** 2025-02-04 09:41:25.423558 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:41:25.423563 | orchestrator | 2025-02-04 09:41:25.423567 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ******** 2025-02-04 09:41:25.423572 | orchestrator | Tuesday 04 February 2025 09:41:16 +0000 (0:00:00.813) 0:14:02.482 ****** 2025-02-04 09:41:25.423577 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.423582 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.423587 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.423592 | orchestrator | 2025-02-04 09:41:25.423597 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-02-04 09:41:25.423602 | orchestrator | Tuesday 04 February 2025 09:41:17 +0000 (0:00:00.398) 0:14:02.881 ****** 2025-02-04 09:41:25.423607 | orchestrator | changed: [testbed-node-3] 2025-02-04 09:41:25.423612 | orchestrator | changed: [testbed-node-4] 2025-02-04 09:41:25.423617 | orchestrator | changed: [testbed-node-5] 2025-02-04 09:41:25.423622 | orchestrator | 2025-02-04 09:41:25.423627 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-02-04 09:41:25.423632 | orchestrator | Tuesday 04 February 2025 09:41:19 +0000 (0:00:01.757) 0:14:04.638 ****** 2025-02-04 09:41:25.423637 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-04 09:41:25.423644 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-04 09:41:25.423649 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-04 09:41:25.423654 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:41:25.423659 | orchestrator | 2025-02-04 09:41:25.423664 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-02-04 09:41:25.423669 | orchestrator | Tuesday 04 February 2025 09:41:20 +0000 (0:00:00.944) 0:14:05.583 ****** 2025-02-04 09:41:25.423674 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:41:25.423679 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:41:25.423684 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:41:25.423689 | orchestrator | 2025-02-04 09:41:25.423694 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-02-04 09:41:25.423699 | 
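The systemd tasks above render a per-instance unit plus a ceph-radosgw.target, enable the target, and start one rgw container per instance (rgw0 on each node, bound to its 192.168.16.x address on port 8081). The enable-and-start portion is roughly the following sketch; the unit name pattern and the "rgw_instances" variable are assumptions inferred from the loop items in the log:

    - name: Enable ceph-radosgw.target (sketch)
      ansible.builtin.systemd:
        name: ceph-radosgw.target
        enabled: true
        daemon_reload: true

    - name: Start one radosgw container per instance (sketch; unit name assumed)
      ansible.builtin.systemd:
        name: "ceph-radosgw@rgw.{{ ansible_facts['hostname'] }}.{{ item.instance_name }}"
        state: started
        enabled: true
      loop: "{{ rgw_instances }}"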
orchestrator | Tuesday 04 February 2025 09:41:20 +0000 (0:00:00.589) 0:14:06.172 ******
2025-02-04 09:41:25.423704 | orchestrator | changed: [testbed-node-3]
2025-02-04 09:41:25.423709 | orchestrator | changed: [testbed-node-4]
2025-02-04 09:41:25.423714 | orchestrator | changed: [testbed-node-5]
2025-02-04 09:41:25.423719 | orchestrator |
2025-02-04 09:41:25.423724 | orchestrator | PLAY RECAP *********************************************************************
2025-02-04 09:41:25.423729 | orchestrator | testbed-node-0 : ok=120  changed=33  unreachable=0 failed=0 skipped=274  rescued=0 ignored=0
2025-02-04 09:41:25.423737 | orchestrator | testbed-node-1 : ok=116  changed=32  unreachable=0 failed=0 skipped=263  rescued=0 ignored=0
2025-02-04 09:41:28.444654 | orchestrator | testbed-node-2 : ok=123  changed=34  unreachable=0 failed=0 skipped=262  rescued=0 ignored=0
2025-02-04 09:41:28.444742 | orchestrator | testbed-node-3 : ok=184  changed=50  unreachable=0 failed=0 skipped=366  rescued=0 ignored=0
2025-02-04 09:41:28.444752 | orchestrator | testbed-node-4 : ok=164  changed=43  unreachable=0 failed=0 skipped=310  rescued=0 ignored=0
2025-02-04 09:41:28.444760 | orchestrator | testbed-node-5 : ok=166  changed=44  unreachable=0 failed=0 skipped=308  rescued=0 ignored=0
2025-02-04 09:41:28.444767 | orchestrator |
2025-02-04 09:41:28.444773 | orchestrator |
2025-02-04 09:41:28.444780 | orchestrator |
2025-02-04 09:41:28.444788 | orchestrator | TASKS RECAP ********************************************************************
2025-02-04 09:41:28.444795 | orchestrator | Tuesday 04 February 2025 09:41:21 +0000 (0:00:01.291) 0:14:07.464 ******
2025-02-04 09:41:28.444802 | orchestrator | ===============================================================================
2025-02-04 09:41:28.444808 | orchestrator | ceph-osd : use ceph-volume to create bluestore osds -------------------- 35.34s
2025-02-04 09:41:28.444815 | orchestrator | ceph-rgw : create replicated pools for rgw ----------------------------- 24.12s
2025-02-04 09:41:28.444821 | orchestrator | ceph-container-common : pulling nexus.testbed.osism.xyz:8193/osism/ceph-daemon:quincy image -- 22.66s
2025-02-04 09:41:28.444829 | orchestrator | ceph-mon : waiting for the monitor(s) to form the quorum... ------------ 21.61s
2025-02-04 09:41:28.444835 | orchestrator | ceph-mgr : wait for all mgr to be up ----------------------------------- 19.84s
2025-02-04 09:41:28.444849 | orchestrator | ceph-mds : wait for mds socket to exist -------------------------------- 17.63s
2025-02-04 09:41:28.444856 | orchestrator | ceph-osd : wait for all osd to be up ----------------------------------- 12.19s
2025-02-04 09:41:28.444862 | orchestrator | ceph-config : create ceph initial directories --------------------------- 7.83s
2025-02-04 09:41:28.444868 | orchestrator | ceph-mon : fetch ceph initial keys -------------------------------------- 7.65s
2025-02-04 09:41:28.444874 | orchestrator | ceph-mgr : create ceph mgr keyring(s) on a mon node --------------------- 7.30s
2025-02-04 09:41:28.444880 | orchestrator | ceph-mgr : disable ceph mgr enabled modules ----------------------------- 6.97s
2025-02-04 09:41:28.444886 | orchestrator | ceph-mds : create filesystem pools -------------------------------------- 6.84s
2025-02-04 09:41:28.444892 | orchestrator | ceph-config : generate ceph.conf configuration file --------------------- 6.46s
2025-02-04 09:41:28.444898 | orchestrator | ceph-mgr : add modules to ceph-mgr -------------------------------------- 5.40s
2025-02-04 09:41:28.444903 | orchestrator | ceph-osd : apply operating system tuning -------------------------------- 5.10s
2025-02-04 09:41:28.444909 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 4.38s
2025-02-04 09:41:28.444915 | orchestrator | ceph-crash : start the ceph-crash service ------------------------------- 4.19s
2025-02-04 09:41:28.444921 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 3.82s
2025-02-04 09:41:28.444927 | orchestrator | ceph-container-common : get ceph version -------------------------------- 3.69s
2025-02-04 09:41:28.444933 | orchestrator | ceph-handler : remove tempdir for scripts ------------------------------- 3.43s
2025-02-04 09:41:28.444940 | orchestrator | 2025-02-04 09:41:25 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED
2025-02-04 09:41:28.444947 | orchestrator | 2025-02-04 09:41:25 | INFO  | Task a2947d26-032a-46b2-b0b0-2244a30dbc60 is in state STARTED
2025-02-04 09:41:28.444953 | orchestrator | 2025-02-04 09:41:25 | INFO  | Task 966254e6-54a6-4070-9ba7-b76f5e559021 is in state STARTED
2025-02-04 09:41:28.444959 | orchestrator | 2025-02-04 09:41:25 | INFO  | Wait 1 second(s) until the next check
2025-02-04 09:41:28.444976 | orchestrator | 2025-02-04 09:41:28 | INFO  | Task dceb67c2-663d-4aca-9ad7-8c4c843bf5c5 is in state STARTED
2025-02-04 09:41:28.446294 | orchestrator | 2025-02-04 09:41:28 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED
2025-02-04 09:41:28.448073 | orchestrator | 2025-02-04 09:41:28 | INFO  | Task a2947d26-032a-46b2-b0b0-2244a30dbc60 is in state SUCCESS
2025-02-04 09:41:28.449939 | orchestrator |
2025-02-04 09:41:28.449979 | orchestrator |
2025-02-04 09:41:28.449985 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-02-04 09:41:28.449993 | orchestrator |
2025-02-04 09:41:28.449999 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-02-04 09:41:28.450006 | orchestrator | Tuesday 04 February 2025 09:36:23 +0000 (0:00:00.407) 0:00:00.407 ******
2025-02-04 09:41:28.450041 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:41:28.450051 | orchestrator | ok:
[testbed-node-1] 2025-02-04 09:41:28.450058 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:41:28.450064 | orchestrator | 2025-02-04 09:41:28.450071 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-04 09:41:28.450077 | orchestrator | Tuesday 04 February 2025 09:36:23 +0000 (0:00:00.457) 0:00:00.864 ****** 2025-02-04 09:41:28.450085 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-02-04 09:41:28.450109 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-02-04 09:41:28.450116 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-02-04 09:41:28.450122 | orchestrator | 2025-02-04 09:41:28.450129 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-02-04 09:41:28.450135 | orchestrator | 2025-02-04 09:41:28.450141 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-02-04 09:41:28.450169 | orchestrator | Tuesday 04 February 2025 09:36:24 +0000 (0:00:00.556) 0:00:01.420 ****** 2025-02-04 09:41:28.450176 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:41:28.450183 | orchestrator | 2025-02-04 09:41:28.450189 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-02-04 09:41:28.450195 | orchestrator | Tuesday 04 February 2025 09:36:24 +0000 (0:00:00.669) 0:00:02.090 ****** 2025-02-04 09:41:28.450221 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-02-04 09:41:28.450228 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-02-04 09:41:28.450234 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-02-04 09:41:28.450240 | orchestrator | 2025-02-04 09:41:28.450246 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-02-04 09:41:28.450252 | orchestrator | Tuesday 04 February 2025 09:36:27 +0000 (0:00:02.042) 0:00:04.132 ****** 2025-02-04 09:41:28.450261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-04 09:41:28.450270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-04 09:41:28.450294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-04 09:41:28.450302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-04 09:41:28.450348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-04 09:41:28.450358 | orchestrator | changed: 
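An aside on the "Setting sysctl values" step above: OpenSearch memory-maps its index segments and will not start with the kernel default vm.max_map_count, so the role raises it to 262144 on every node. In effect this matches the following stock-module sketch (the play wrapper is illustrative; the value is the one shown in the log), persisted so it survives reboots:

    - name: Raise vm.max_map_count for OpenSearch (sketch)
      ansible.posix.sysctl:
        name: vm.max_map_count
        value: "262144"
        state: present
        reload: true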
[testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-04 09:41:28.450369 | orchestrator | 2025-02-04 09:41:28.450376 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-02-04 09:41:28.450383 | orchestrator | Tuesday 04 February 2025 09:36:28 +0000 (0:00:01.839) 0:00:05.972 ****** 2025-02-04 09:41:28.450389 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:41:28.450396 | orchestrator | 2025-02-04 09:41:28.450402 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-02-04 09:41:28.450408 | orchestrator | Tuesday 04 February 2025 09:36:29 +0000 (0:00:01.027) 0:00:07.000 ****** 2025-02-04 09:41:28.450422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-04 09:41:28.450429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-04 09:41:28.450436 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-04 09:41:28.450444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-04 09:41:28.450535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-04 09:41:28.450544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-04 09:41:28.450550 | orchestrator | 2025-02-04 09:41:28.450556 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-02-04 09:41:28.450563 | orchestrator | Tuesday 04 February 2025 09:36:33 +0000 (0:00:03.365) 0:00:10.366 ****** 2025-02-04 09:41:28.450569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-02-04 09:41:28.450583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-02-04 09:41:28.450589 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:28.450600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-02-04 09:41:28.450607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-02-04 09:41:28.450613 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:28.450619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-02-04 09:41:28.450626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-02-04 09:41:28.450638 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:28.450645 | orchestrator | 2025-02-04 09:41:28.450651 | 
orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-02-04 09:41:28.450657 | orchestrator | Tuesday 04 February 2025 09:36:34 +0000 (0:00:00.861) 0:00:11.228 ****** 2025-02-04 09:41:28.450666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-02-04 09:41:28.450673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-02-04 09:41:28.450679 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:28.450685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-02-04 09:41:28.450695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-02-04 09:41:28.450704 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:28.450710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-02-04 09:41:28.450721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-02-04 09:41:28.450728 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:28.450734 | orchestrator | 2025-02-04 09:41:28.450740 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-02-04 09:41:28.450746 | orchestrator | Tuesday 04 February 2025 09:36:35 +0000 (0:00:01.147) 0:00:12.376 ****** 2025-02-04 09:41:28.450752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-04 09:41:28.450762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-04 09:41:28.450768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-04 09:41:28.450784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-04 09:41:28.450790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-04 09:41:28.450801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-04 09:41:28.450812 | orchestrator | 2025-02-04 09:41:28.450818 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-02-04 09:41:28.450824 | orchestrator | Tuesday 04 February 2025 09:36:38 +0000 (0:00:03.026) 0:00:15.402 ****** 2025-02-04 09:41:28.450830 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:41:28.450836 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:41:28.450842 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:41:28.450848 | orchestrator | 2025-02-04 09:41:28.450854 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-02-04 09:41:28.450860 | orchestrator | Tuesday 04 February 2025 09:36:41 +0000 (0:00:03.245) 0:00:18.647 ****** 2025-02-04 09:41:28.450866 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:41:28.450872 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:41:28.450878 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:41:28.450884 | orchestrator | 2025-02-04 09:41:28.450890 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-02-04 09:41:28.450896 | orchestrator | Tuesday 04 February 2025 09:36:43 +0000 (0:00:02.011) 0:00:20.659 ****** 2025-02-04 09:41:28.450906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-04 09:41:28.450912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-04 09:41:28.450922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch:2024.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-02-04 09:41:28.450928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-04 09:41:28.450937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-04 09:41:28.450947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'nexus.testbed.osism.xyz:8193/kolla/opensearch-dashboards:2024.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-02-04 09:41:28.450957 | orchestrator | 2025-02-04 09:41:28.450963 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-02-04 09:41:28.450969 | orchestrator | Tuesday 04 February 2025 09:36:46 +0000 (0:00:02.562) 0:00:23.221 ****** 2025-02-04 09:41:28.450975 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:28.450981 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:41:28.450987 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:41:28.450993 | orchestrator | 2025-02-04 09:41:28.450999 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-02-04 09:41:28.451005 | orchestrator | Tuesday 04 February 2025 09:36:46 +0000 (0:00:00.427) 0:00:23.649 ****** 2025-02-04 09:41:28.451011 | orchestrator | 2025-02-04 09:41:28.451017 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-02-04 09:41:28.451024 | orchestrator | Tuesday 04 February 2025 09:36:46 +0000 (0:00:00.058) 0:00:23.707 ****** 2025-02-04 09:41:28.451029 | orchestrator | 2025-02-04 09:41:28.451038 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-02-04 09:41:28.451044 | orchestrator | Tuesday 04 February 2025 09:36:46 +0000 (0:00:00.062) 0:00:23.770 ****** 2025-02-04 09:41:28.451050 | orchestrator | 2025-02-04 09:41:28.451056 | orchestrator | RUNNING HANDLER [opensearch : Disable shard 
allocation] ************************ 2025-02-04 09:41:28.451062 | orchestrator | Tuesday 04 February 2025 09:36:46 +0000 (0:00:00.072) 0:00:23.842 ****** 2025-02-04 09:41:28.451068 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:28.451074 | orchestrator | 2025-02-04 09:41:28.451080 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-02-04 09:41:28.451086 | orchestrator | Tuesday 04 February 2025 09:36:47 +0000 (0:00:00.595) 0:00:24.438 ****** 2025-02-04 09:41:28.451092 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:41:28.451098 | orchestrator | 2025-02-04 09:41:28.451104 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-02-04 09:41:28.451110 | orchestrator | Tuesday 04 February 2025 09:36:47 +0000 (0:00:00.325) 0:00:24.763 ****** 2025-02-04 09:41:28.451116 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:41:28.451122 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:41:28.451128 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:41:28.451134 | orchestrator | 2025-02-04 09:41:28.451140 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-02-04 09:41:28.451189 | orchestrator | Tuesday 04 February 2025 09:37:12 +0000 (0:00:24.723) 0:00:49.487 ****** 2025-02-04 09:41:28.451198 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:41:28.451205 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:41:28.451211 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:41:28.451218 | orchestrator | 2025-02-04 09:41:28.451225 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-02-04 09:41:28.451232 | orchestrator | Tuesday 04 February 2025 09:37:59 +0000 (0:00:47.468) 0:01:36.956 ****** 2025-02-04 09:41:28.451239 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:41:28.451246 | orchestrator | 2025-02-04 09:41:28.451252 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-02-04 09:41:28.451259 | orchestrator | Tuesday 04 February 2025 09:38:01 +0000 (0:00:01.238) 0:01:38.194 ****** 2025-02-04 09:41:28.451265 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for OpenSearch to become ready (30 retries left). 2025-02-04 09:41:28.451272 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for OpenSearch to become ready (29 retries left). 2025-02-04 09:41:28.451279 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for OpenSearch to become ready (28 retries left). 2025-02-04 09:41:28.451286 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for OpenSearch to become ready (27 retries left). 2025-02-04 09:41:28.451293 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for OpenSearch to become ready (26 retries left). 2025-02-04 09:41:28.451303 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for OpenSearch to become ready (25 retries left). 2025-02-04 09:41:28.451310 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for OpenSearch to become ready (24 retries left). 2025-02-04 09:41:28.451316 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for OpenSearch to become ready (23 retries left). 2025-02-04 09:41:28.451323 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for OpenSearch to become ready (22 retries left). 
2025-02-04 09:41:28.451334 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for OpenSearch to become ready (21 retries left). 2025-02-04 09:41:28.451341 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for OpenSearch to become ready (20 retries left). 2025-02-04 09:41:28.451348 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for OpenSearch to become ready (19 retries left). 2025-02-04 09:41:28.451355 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for OpenSearch to become ready (18 retries left). 2025-02-04 09:41:28.451362 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for OpenSearch to become ready (17 retries left). 2025-02-04 09:41:28.451369 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for OpenSearch to become ready (16 retries left). 2025-02-04 09:41:28.451375 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for OpenSearch to become ready (15 retries left). 2025-02-04 09:41:28.451390 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for OpenSearch to become ready (14 retries left). 2025-02-04 09:41:28.451403 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for OpenSearch to become ready (13 retries left). 2025-02-04 09:41:28.451410 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for OpenSearch to become ready (12 retries left). 2025-02-04 09:41:28.451417 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for OpenSearch to become ready (11 retries left). 2025-02-04 09:41:28.451423 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for OpenSearch to become ready (10 retries left). 2025-02-04 09:41:28.451430 | orchestrator | 2025-02-04 09:41:28.451437 | orchestrator | STILL ALIVE [task 'opensearch : Wait for OpenSearch to become ready' is running] *** 2025-02-04 09:41:28.451444 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for OpenSearch to become ready (9 retries left). 2025-02-04 09:41:28.451450 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for OpenSearch to become ready (8 retries left). 2025-02-04 09:41:28.451456 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for OpenSearch to become ready (7 retries left). 2025-02-04 09:41:28.451462 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for OpenSearch to become ready (6 retries left). 2025-02-04 09:41:28.451468 | orchestrator | 2025-02-04 09:41:28.451473 | orchestrator | STILL ALIVE [task 'opensearch : Wait for OpenSearch to become ready' is running] *** 2025-02-04 09:41:28.451482 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for OpenSearch to become ready (5 retries left). 2025-02-04 09:41:28.451489 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for OpenSearch to become ready (4 retries left). 2025-02-04 09:41:28.451495 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for OpenSearch to become ready (3 retries left). 2025-02-04 09:41:28.451501 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for OpenSearch to become ready (2 retries left). 2025-02-04 09:41:28.451506 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for OpenSearch to become ready (1 retries left). 2025-02-04 09:41:28.451512 | orchestrator | 2025-02-04 09:41:28.451518 | orchestrator | STILL ALIVE [task 'opensearch : Wait for OpenSearch to become ready' is running] *** 2025-02-04 09:41:28.451525 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"action": "uri", "attempts": 30, "changed": false, "elapsed": 1, "msg": "Status code was -1 and not [200]: Request failed: ", "redirected": false, "status": -1, "url": "https://api-int.testbed.osism.xyz:9200/_cluster/stats"} 2025-02-04 09:41:28.451537 | orchestrator | 2025-02-04 09:41:28.451543 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:41:28.451549 | orchestrator | testbed-node-0 : ok=14  changed=9  unreachable=0 failed=1  skipped=5  rescued=0 ignored=0 2025-02-04 09:41:28.451555 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-02-04 09:41:28.451562 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-02-04 09:41:28.451568 | orchestrator | 2025-02-04 09:41:28.451573 | orchestrator | 2025-02-04 09:41:28.451579 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:41:28.451585 | orchestrator | Tuesday 04 February 2025 09:41:26 +0000 (0:03:25.131) 0:05:03.325 ****** 2025-02-04 09:41:28.451591 | orchestrator | =============================================================================== 2025-02-04 09:41:28.451597 | orchestrator | opensearch : Wait for OpenSearch to become ready ---------------------- 205.13s 2025-02-04 09:41:28.451603 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 47.47s 2025-02-04 09:41:28.451609 | orchestrator | opensearch : Restart opensearch container ------------------------------ 24.72s 2025-02-04 09:41:28.451615 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.37s 2025-02-04 09:41:28.451621 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.25s 2025-02-04 09:41:28.451627 | orchestrator | opensearch : Copying over config.json files for services ---------------- 3.03s 2025-02-04 09:41:28.451633 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.56s 2025-02-04 09:41:28.451641 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 2.04s 2025-02-04 09:41:28.452072 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.01s 2025-02-04 09:41:28.452084 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.84s 2025-02-04 09:41:28.452091 | orchestrator | opensearch : include_tasks ---------------------------------------------- 1.24s 2025-02-04 09:41:28.452098 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.15s 2025-02-04 09:41:28.452105 | orchestrator | opensearch : include_tasks ---------------------------------------------- 1.03s 2025-02-04 09:41:28.452111 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.86s 2025-02-04 09:41:28.452117 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.67s 2025-02-04 09:41:28.452124 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 0.60s 2025-02-04 09:41:28.452130 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.56s 2025-02-04 09:41:28.452136 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.46s 2025-02-04 09:41:28.452143 | orchestrator | opensearch : 
include_tasks ---------------------------------------------- 0.43s 2025-02-04 09:41:28.452163 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.33s 2025-02-04 09:41:28.452170 | orchestrator | 2025-02-04 09:41:28 | INFO  | Task 966254e6-54a6-4070-9ba7-b76f5e559021 is in state STARTED 2025-02-04 09:41:28.452180 | orchestrator | 2025-02-04 09:41:28 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:41:31.506195 | orchestrator | 2025-02-04 09:41:31 | INFO  | Task dceb67c2-663d-4aca-9ad7-8c4c843bf5c5 is in state STARTED 2025-02-04 09:41:31.507003 | orchestrator | 2025-02-04 09:41:31 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:41:31.507439 | orchestrator | 2025-02-04 09:41:31 | INFO  | Task 966254e6-54a6-4070-9ba7-b76f5e559021 is in state STARTED 2025-02-04 09:41:34.558832 | orchestrator | 2025-02-04 09:41:31 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:41:34.558976 | orchestrator | 2025-02-04 09:41:34 | INFO  | Task dceb67c2-663d-4aca-9ad7-8c4c843bf5c5 is in state STARTED 2025-02-04 09:41:34.560977 | orchestrator | 2025-02-04 09:41:34 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:41:34.567867 | orchestrator | 2025-02-04 09:41:34 | INFO  | Task 966254e6-54a6-4070-9ba7-b76f5e559021 is in state STARTED 2025-02-04 09:41:37.629226 | orchestrator | 2025-02-04 09:41:34 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:41:37.629363 | orchestrator | 2025-02-04 09:41:37 | INFO  | Task dceb67c2-663d-4aca-9ad7-8c4c843bf5c5 is in state STARTED 2025-02-04 09:41:40.676101 | orchestrator | 2025-02-04 09:41:37 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:41:40.676311 | orchestrator | 2025-02-04 09:41:37 | INFO  | Task 966254e6-54a6-4070-9ba7-b76f5e559021 is in state STARTED 2025-02-04 09:41:40.676333 | orchestrator | 2025-02-04 09:41:37 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:41:40.676370 | orchestrator | 2025-02-04 09:41:40 | INFO  | Task dceb67c2-663d-4aca-9ad7-8c4c843bf5c5 is in state STARTED 2025-02-04 09:41:40.676726 | orchestrator | 2025-02-04 09:41:40 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:41:40.677809 | orchestrator | 2025-02-04 09:41:40 | INFO  | Task 966254e6-54a6-4070-9ba7-b76f5e559021 is in state STARTED 2025-02-04 09:41:43.717889 | orchestrator | 2025-02-04 09:41:40 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:41:43.718003 | orchestrator | 2025-02-04 09:41:43 | INFO  | Task dceb67c2-663d-4aca-9ad7-8c4c843bf5c5 is in state STARTED 2025-02-04 09:41:43.718701 | orchestrator | 2025-02-04 09:41:43 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:41:43.718724 | orchestrator | 2025-02-04 09:41:43 | INFO  | Task 966254e6-54a6-4070-9ba7-b76f5e559021 is in state STARTED 2025-02-04 09:41:43.718965 | orchestrator | 2025-02-04 09:41:43 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:41:46.758364 | orchestrator | 2025-02-04 09:41:46 | INFO  | Task dceb67c2-663d-4aca-9ad7-8c4c843bf5c5 is in state STARTED 2025-02-04 09:41:46.763030 | orchestrator | 2025-02-04 09:41:46 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:41:46.763094 | orchestrator | 2025-02-04 09:41:46 | INFO  | Task 966254e6-54a6-4070-9ba7-b76f5e559021 is in state STARTED 2025-02-04 09:41:49.819475 | orchestrator | 2025-02-04 09:41:46 | INFO  | Wait 1 second(s) until the next 
check 2025-02-04 09:41:49.819610 | orchestrator | 2025-02-04 09:41:49 | INFO  | Task dceb67c2-663d-4aca-9ad7-8c4c843bf5c5 is in state STARTED 2025-02-04 09:41:49.822994 | orchestrator | 2025-02-04 09:41:49 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:41:49.825544 | orchestrator | 2025-02-04 09:41:49 | INFO  | Task 966254e6-54a6-4070-9ba7-b76f5e559021 is in state STARTED 2025-02-04 09:41:52.862419 | orchestrator | 2025-02-04 09:41:49 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:41:52.862644 | orchestrator | 2025-02-04 09:41:52 | INFO  | Task dceb67c2-663d-4aca-9ad7-8c4c843bf5c5 is in state STARTED 2025-02-04 09:41:52.864880 | orchestrator | 2025-02-04 09:41:52 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:41:52.864936 | orchestrator | 2025-02-04 09:41:52 | INFO  | Task 966254e6-54a6-4070-9ba7-b76f5e559021 is in state STARTED 2025-02-04 09:41:52.867707 | orchestrator | 2025-02-04 09:41:52 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:41:55.914516 | orchestrator | 2025-02-04 09:41:55 | INFO  | Task dceb67c2-663d-4aca-9ad7-8c4c843bf5c5 is in state STARTED 2025-02-04 09:41:55.915847 | orchestrator | 2025-02-04 09:41:55 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:41:55.917189 | orchestrator | 2025-02-04 09:41:55 | INFO  | Task 966254e6-54a6-4070-9ba7-b76f5e559021 is in state STARTED 2025-02-04 09:41:58.966590 | orchestrator | 2025-02-04 09:41:55 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:41:58.966734 | orchestrator | 2025-02-04 09:41:58 | INFO  | Task dceb67c2-663d-4aca-9ad7-8c4c843bf5c5 is in state STARTED 2025-02-04 09:41:58.967165 | orchestrator | 2025-02-04 09:41:58 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:41:58.971356 | orchestrator | 2025-02-04 09:41:58 | INFO  | Task 966254e6-54a6-4070-9ba7-b76f5e559021 is in state STARTED 2025-02-04 09:42:02.028330 | orchestrator | 2025-02-04 09:41:58 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:42:02.028448 | orchestrator | 2025-02-04 09:42:02 | INFO  | Task dceb67c2-663d-4aca-9ad7-8c4c843bf5c5 is in state STARTED 2025-02-04 09:42:02.030595 | orchestrator | 2025-02-04 09:42:02 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:42:02.030645 | orchestrator | 2025-02-04 09:42:02 | INFO  | Task 966254e6-54a6-4070-9ba7-b76f5e559021 is in state STARTED 2025-02-04 09:42:05.083229 | orchestrator | 2025-02-04 09:42:02 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:42:05.083567 | orchestrator | 2025-02-04 09:42:05 | INFO  | Task dceb67c2-663d-4aca-9ad7-8c4c843bf5c5 is in state STARTED 2025-02-04 09:42:05.084876 | orchestrator | 2025-02-04 09:42:05 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:42:05.084926 | orchestrator | 2025-02-04 09:42:05 | INFO  | Task 966254e6-54a6-4070-9ba7-b76f5e559021 is in state STARTED 2025-02-04 09:42:08.138197 | orchestrator | 2025-02-04 09:42:05 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:42:08.138353 | orchestrator | 2025-02-04 09:42:08 | INFO  | Task dceb67c2-663d-4aca-9ad7-8c4c843bf5c5 is in state STARTED 2025-02-04 09:42:08.139046 | orchestrator | 2025-02-04 09:42:08 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:42:08.139632 | orchestrator | 2025-02-04 09:42:08 | INFO  | Task 966254e6-54a6-4070-9ba7-b76f5e559021 is in state STARTED 2025-02-04 
09:42:11.205378 | orchestrator | 2025-02-04 09:42:08 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:42:11.205520 | orchestrator | 2025-02-04 09:42:11 | INFO  | Task dceb67c2-663d-4aca-9ad7-8c4c843bf5c5 is in state STARTED 2025-02-04 09:42:11.211074 | orchestrator | 2025-02-04 09:42:11 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:42:11.211192 | orchestrator | 2025-02-04 09:42:11 | INFO  | Task 966254e6-54a6-4070-9ba7-b76f5e559021 is in state STARTED 2025-02-04 09:42:14.261540 | orchestrator | 2025-02-04 09:42:11 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:42:14.261682 | orchestrator | 2025-02-04 09:42:14 | INFO  | Task dceb67c2-663d-4aca-9ad7-8c4c843bf5c5 is in state STARTED 2025-02-04 09:42:14.262061 | orchestrator | 2025-02-04 09:42:14 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:42:14.263366 | orchestrator | 2025-02-04 09:42:14 | INFO  | Task 966254e6-54a6-4070-9ba7-b76f5e559021 is in state STARTED 2025-02-04 09:42:17.325446 | orchestrator | 2025-02-04 09:42:14 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:42:17.325701 | orchestrator | 2025-02-04 09:42:17 | INFO  | Task dceb67c2-663d-4aca-9ad7-8c4c843bf5c5 is in state STARTED 2025-02-04 09:42:17.325768 | orchestrator | 2025-02-04 09:42:17 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:42:17.326689 | orchestrator | 2025-02-04 09:42:17 | INFO  | Task 966254e6-54a6-4070-9ba7-b76f5e559021 is in state STARTED 2025-02-04 09:42:20.364152 | orchestrator | 2025-02-04 09:42:17 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:42:20.364328 | orchestrator | 2025-02-04 09:42:20 | INFO  | Task dceb67c2-663d-4aca-9ad7-8c4c843bf5c5 is in state STARTED 2025-02-04 09:42:20.364495 | orchestrator | 2025-02-04 09:42:20 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:42:20.364527 | orchestrator | 2025-02-04 09:42:20 | INFO  | Task 966254e6-54a6-4070-9ba7-b76f5e559021 is in state STARTED 2025-02-04 09:42:23.408584 | orchestrator | 2025-02-04 09:42:20 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:42:23.408736 | orchestrator | 2025-02-04 09:42:23 | INFO  | Task dceb67c2-663d-4aca-9ad7-8c4c843bf5c5 is in state STARTED 2025-02-04 09:42:23.409586 | orchestrator | 2025-02-04 09:42:23 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:42:23.409699 | orchestrator | 2025-02-04 09:42:23 | INFO  | Task 966254e6-54a6-4070-9ba7-b76f5e559021 is in state STARTED 2025-02-04 09:42:26.458855 | orchestrator | 2025-02-04 09:42:23 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:42:26.459000 | orchestrator | 2025-02-04 09:42:26 | INFO  | Task dceb67c2-663d-4aca-9ad7-8c4c843bf5c5 is in state STARTED 2025-02-04 09:42:26.460372 | orchestrator | 2025-02-04 09:42:26 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:42:26.460433 | orchestrator | 2025-02-04 09:42:26 | INFO  | Task 966254e6-54a6-4070-9ba7-b76f5e559021 is in state STARTED 2025-02-04 09:42:26.460781 | orchestrator | 2025-02-04 09:42:26 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:42:29.509852 | orchestrator | 2025-02-04 09:42:29 | INFO  | Task f81953cc-8580-436a-9e78-599283781822 is in state STARTED 2025-02-04 09:42:29.510176 | orchestrator | 2025-02-04 09:42:29 | INFO  | Task dceb67c2-663d-4aca-9ad7-8c4c843bf5c5 is in state SUCCESS 2025-02-04 09:42:29.512980 | orchestrator | 2025-02-04 
09:42:29.513038 | orchestrator | 2025-02-04 09:42:29.513051 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-04 09:42:29.513093 | orchestrator | 2025-02-04 09:42:29.513107 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-04 09:42:29.513118 | orchestrator | Tuesday 04 February 2025 09:41:23 +0000 (0:00:00.345) 0:00:00.345 ****** 2025-02-04 09:42:29.513129 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:42:29.513141 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:42:29.513166 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:42:29.513177 | orchestrator | 2025-02-04 09:42:29.513188 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-04 09:42:29.513199 | orchestrator | Tuesday 04 February 2025 09:41:24 +0000 (0:00:00.468) 0:00:00.814 ****** 2025-02-04 09:42:29.513209 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-02-04 09:42:29.513220 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-02-04 09:42:29.513231 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-02-04 09:42:29.513241 | orchestrator | 2025-02-04 09:42:29.513252 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-02-04 09:42:29.513262 | orchestrator | 2025-02-04 09:42:29.513273 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-02-04 09:42:29.513283 | orchestrator | Tuesday 04 February 2025 09:41:24 +0000 (0:00:00.607) 0:00:01.421 ****** 2025-02-04 09:42:29.513294 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:42:29.513330 | orchestrator | 2025-02-04 09:42:29.513348 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-02-04 09:42:29.513363 | orchestrator | Tuesday 04 February 2025 09:41:25 +0000 (0:00:00.791) 0:00:02.213 ****** 2025-02-04 09:42:29.513382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-04 09:42:29.513424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-04 09:42:29.513460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 
'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-04 09:42:29.513477 | orchestrator | 2025-02-04 09:42:29.513494 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-02-04 09:42:29.513511 | orchestrator | Tuesday 04 February 2025 09:41:28 +0000 (0:00:02.396) 0:00:04.609 ****** 2025-02-04 09:42:29.513528 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:42:29.513547 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:42:29.513563 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:42:29.513581 | orchestrator | 2025-02-04 09:42:29.513598 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-02-04 09:42:29.513616 | orchestrator | Tuesday 04 February 2025 09:41:28 +0000 (0:00:00.297) 0:00:04.906 ****** 2025-02-04 09:42:29.513643 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-02-04 09:42:29.513659 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-02-04 09:42:29.513677 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-02-04 09:42:29.513695 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-02-04 09:42:29.513712 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-02-04 09:42:29.513739 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-02-04 09:42:29.513756 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-02-04 09:42:29.513773 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-02-04 09:42:29.513795 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-02-04 09:42:29.513812 | orchestrator | skipping: [testbed-node-0] => (item={'name': 
'trove', 'enabled': False})  2025-02-04 09:42:29.513829 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-02-04 09:42:29.513845 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-02-04 09:42:29.513862 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-02-04 09:42:29.513878 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-02-04 09:42:29.513894 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-02-04 09:42:29.513909 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-02-04 09:42:29.513926 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-02-04 09:42:29.513942 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-02-04 09:42:29.513960 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-1, testbed-node-0, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-02-04 09:42:29.513983 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-1, testbed-node-0, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-02-04 09:42:29.514000 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-1, testbed-node-0, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-02-04 09:42:29.514246 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-1, testbed-node-0, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-02-04 09:42:29.514270 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-1, testbed-node-0, testbed-node-2 => (item={'name': 'heat', 'enabled': True}) 2025-02-04 09:42:29.514282 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-1, testbed-node-0, testbed-node-2 => (item={'name': 'ironic', 'enabled': True}) 2025-02-04 09:42:29.514293 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-1, testbed-node-0, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-02-04 09:42:29.514303 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-1, testbed-node-0, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-02-04 09:42:29.514313 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-1, testbed-node-0, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-02-04 09:42:29.514324 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-1, testbed-node-0, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-02-04 09:42:29.514334 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-1, testbed-node-0, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-02-04 09:42:29.514344 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-1, testbed-node-0, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-02-04 09:42:29.514355 | orchestrator | 2025-02-04 09:42:29.514365 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-04 09:42:29.514376 | 
orchestrator | Tuesday 04 February 2025 09:41:29 +0000 (0:00:01.257) 0:00:06.164 ****** 2025-02-04 09:42:29.514397 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:42:29.514408 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:42:29.514418 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:42:29.514428 | orchestrator | 2025-02-04 09:42:29.514439 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-04 09:42:29.514452 | orchestrator | Tuesday 04 February 2025 09:41:30 +0000 (0:00:00.536) 0:00:06.700 ****** 2025-02-04 09:42:29.514467 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:42:29.514485 | orchestrator | 2025-02-04 09:42:29.514501 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-04 09:42:29.514533 | orchestrator | Tuesday 04 February 2025 09:41:30 +0000 (0:00:00.164) 0:00:06.865 ****** 2025-02-04 09:42:29.514551 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:42:29.514567 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:42:29.514583 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:42:29.514600 | orchestrator | 2025-02-04 09:42:29.514617 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-04 09:42:29.514634 | orchestrator | Tuesday 04 February 2025 09:41:30 +0000 (0:00:00.506) 0:00:07.371 ****** 2025-02-04 09:42:29.514650 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:42:29.514667 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:42:29.514683 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:42:29.514700 | orchestrator | 2025-02-04 09:42:29.514716 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-04 09:42:29.514733 | orchestrator | Tuesday 04 February 2025 09:41:31 +0000 (0:00:00.378) 0:00:07.750 ****** 2025-02-04 09:42:29.514748 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:42:29.514762 | orchestrator | 2025-02-04 09:42:29.514778 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-04 09:42:29.514794 | orchestrator | Tuesday 04 February 2025 09:41:31 +0000 (0:00:00.310) 0:00:08.061 ****** 2025-02-04 09:42:29.514809 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:42:29.514823 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:42:29.514839 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:42:29.514856 | orchestrator | 2025-02-04 09:42:29.514873 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-04 09:42:29.514889 | orchestrator | Tuesday 04 February 2025 09:41:31 +0000 (0:00:00.326) 0:00:08.387 ****** 2025-02-04 09:42:29.514905 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:42:29.514921 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:42:29.514937 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:42:29.514954 | orchestrator | 2025-02-04 09:42:29.514972 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-04 09:42:29.514989 | orchestrator | Tuesday 04 February 2025 09:41:32 +0000 (0:00:00.573) 0:00:08.960 ****** 2025-02-04 09:42:29.515007 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:42:29.515023 | orchestrator | 2025-02-04 09:42:29.515039 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-04 09:42:29.515057 | orchestrator | Tuesday 04 February 2025 
09:41:32 +0000 (0:00:00.139) 0:00:09.099 ****** 2025-02-04 09:42:29.515108 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:42:29.515126 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:42:29.515142 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:42:29.515159 | orchestrator | 2025-02-04 09:42:29.515176 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-04 09:42:29.515193 | orchestrator | Tuesday 04 February 2025 09:41:33 +0000 (0:00:00.511) 0:00:09.611 ****** 2025-02-04 09:42:29.515209 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:42:29.515226 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:42:29.515243 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:42:29.515261 | orchestrator | 2025-02-04 09:42:29.515277 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-04 09:42:29.515294 | orchestrator | Tuesday 04 February 2025 09:41:33 +0000 (0:00:00.636) 0:00:10.247 ****** 2025-02-04 09:42:29.515311 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:42:29.515341 | orchestrator | 2025-02-04 09:42:29.515499 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-04 09:42:29.515521 | orchestrator | Tuesday 04 February 2025 09:41:33 +0000 (0:00:00.142) 0:00:10.390 ****** 2025-02-04 09:42:29.515532 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:42:29.515542 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:42:29.515553 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:42:29.515563 | orchestrator | 2025-02-04 09:42:29.515574 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-04 09:42:29.515584 | orchestrator | Tuesday 04 February 2025 09:41:34 +0000 (0:00:00.531) 0:00:10.921 ****** 2025-02-04 09:42:29.515594 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:42:29.515604 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:42:29.515615 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:42:29.515625 | orchestrator | 2025-02-04 09:42:29.515636 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-04 09:42:29.515646 | orchestrator | Tuesday 04 February 2025 09:41:34 +0000 (0:00:00.544) 0:00:11.465 ****** 2025-02-04 09:42:29.515656 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:42:29.515667 | orchestrator | 2025-02-04 09:42:29.515677 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-04 09:42:29.515687 | orchestrator | Tuesday 04 February 2025 09:41:35 +0000 (0:00:00.134) 0:00:11.599 ****** 2025-02-04 09:42:29.515698 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:42:29.515708 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:42:29.515718 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:42:29.515728 | orchestrator | 2025-02-04 09:42:29.515738 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-04 09:42:29.515749 | orchestrator | Tuesday 04 February 2025 09:41:35 +0000 (0:00:00.747) 0:00:12.347 ****** 2025-02-04 09:42:29.515759 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:42:29.515769 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:42:29.515779 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:42:29.515789 | orchestrator | 2025-02-04 09:42:29.515800 | orchestrator | TASK [horizon : Check if policies shall be 
overwritten] ************************ 2025-02-04 09:42:29.515810 | orchestrator | Tuesday 04 February 2025 09:41:36 +0000 (0:00:00.906) 0:00:13.253 ****** 2025-02-04 09:42:29.515820 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:42:29.515831 | orchestrator | 2025-02-04 09:42:29.515841 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-04 09:42:29.515851 | orchestrator | Tuesday 04 February 2025 09:41:37 +0000 (0:00:00.617) 0:00:13.870 ****** 2025-02-04 09:42:29.515861 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:42:29.515872 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:42:29.515882 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:42:29.515892 | orchestrator | 2025-02-04 09:42:29.515903 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-04 09:42:29.515913 | orchestrator | Tuesday 04 February 2025 09:41:37 +0000 (0:00:00.618) 0:00:14.489 ****** 2025-02-04 09:42:29.515923 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:42:29.515934 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:42:29.515944 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:42:29.515955 | orchestrator | 2025-02-04 09:42:29.515976 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-04 09:42:29.515987 | orchestrator | Tuesday 04 February 2025 09:41:38 +0000 (0:00:00.892) 0:00:15.381 ****** 2025-02-04 09:42:29.515997 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:42:29.516008 | orchestrator | 2025-02-04 09:42:29.516018 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-04 09:42:29.516028 | orchestrator | Tuesday 04 February 2025 09:41:39 +0000 (0:00:00.175) 0:00:15.557 ****** 2025-02-04 09:42:29.516038 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:42:29.516055 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:42:29.516126 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:42:29.516143 | orchestrator | 2025-02-04 09:42:29.516164 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-04 09:42:29.516176 | orchestrator | Tuesday 04 February 2025 09:41:39 +0000 (0:00:00.565) 0:00:16.123 ****** 2025-02-04 09:42:29.516188 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:42:29.516200 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:42:29.516211 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:42:29.516222 | orchestrator | 2025-02-04 09:42:29.516235 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-04 09:42:29.516247 | orchestrator | Tuesday 04 February 2025 09:41:40 +0000 (0:00:00.675) 0:00:16.799 ****** 2025-02-04 09:42:29.516258 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:42:29.516269 | orchestrator | 2025-02-04 09:42:29.516279 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-04 09:42:29.516291 | orchestrator | Tuesday 04 February 2025 09:41:40 +0000 (0:00:00.208) 0:00:17.007 ****** 2025-02-04 09:42:29.516308 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:42:29.516325 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:42:29.516341 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:42:29.516357 | orchestrator | 2025-02-04 09:42:29.516374 | orchestrator | TASK [horizon : Update policy file name] 
*************************************** 2025-02-04 09:42:29.516389 | orchestrator | Tuesday 04 February 2025 09:41:40 +0000 (0:00:00.460) 0:00:17.467 ****** 2025-02-04 09:42:29.516405 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:42:29.516421 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:42:29.516439 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:42:29.516456 | orchestrator | 2025-02-04 09:42:29.516473 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-04 09:42:29.516489 | orchestrator | Tuesday 04 February 2025 09:41:41 +0000 (0:00:00.366) 0:00:17.834 ****** 2025-02-04 09:42:29.516507 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:42:29.516517 | orchestrator | 2025-02-04 09:42:29.516527 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-04 09:42:29.516538 | orchestrator | Tuesday 04 February 2025 09:41:41 +0000 (0:00:00.295) 0:00:18.129 ****** 2025-02-04 09:42:29.516548 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:42:29.516558 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:42:29.516569 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:42:29.516579 | orchestrator | 2025-02-04 09:42:29.516595 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-04 09:42:29.516767 | orchestrator | Tuesday 04 February 2025 09:41:41 +0000 (0:00:00.338) 0:00:18.468 ****** 2025-02-04 09:42:29.516783 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:42:29.516798 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:42:29.516813 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:42:29.516827 | orchestrator | 2025-02-04 09:42:29.516843 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-04 09:42:29.516858 | orchestrator | Tuesday 04 February 2025 09:41:42 +0000 (0:00:00.563) 0:00:19.032 ****** 2025-02-04 09:42:29.516872 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:42:29.516886 | orchestrator | 2025-02-04 09:42:29.516899 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-04 09:42:29.516912 | orchestrator | Tuesday 04 February 2025 09:41:42 +0000 (0:00:00.128) 0:00:19.161 ****** 2025-02-04 09:42:29.516925 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:42:29.516939 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:42:29.516952 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:42:29.516967 | orchestrator | 2025-02-04 09:42:29.516981 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-04 09:42:29.516996 | orchestrator | Tuesday 04 February 2025 09:41:43 +0000 (0:00:00.466) 0:00:19.627 ****** 2025-02-04 09:42:29.517011 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:42:29.517025 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:42:29.517039 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:42:29.517054 | orchestrator | 2025-02-04 09:42:29.517094 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-04 09:42:29.517124 | orchestrator | Tuesday 04 February 2025 09:41:43 +0000 (0:00:00.465) 0:00:20.093 ****** 2025-02-04 09:42:29.517136 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:42:29.517145 | orchestrator | 2025-02-04 09:42:29.517154 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 
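The triplet that repeats through this stretch of the play — Update policy file name, Check if policies shall be overwritten, Update custom policy file name — is one pass through /ansible/roles/horizon/tasks/policy_item.yml, which is included once per dashboard-integrated service (ceilometer, cinder, designate, glance, heat, ironic, keystone, magnum, manila, neutron, nova, octavia, as listed earlier) and skipped for disabled ones. A rough sketch of that pattern; the loop items and task names follow the log, while the file-lookup details are assumptions, not the verbatim role:

```yaml
# Sketch of the per-service policy include that produces the repeated triplet.
# Loop items mirror the log output; the path and fact names are assumed.
- name: include_tasks
  ansible.builtin.include_tasks: policy_item.yml
  when: item.enabled | bool
  loop:
    - { name: ceilometer, enabled: "yes" }
    - { name: cinder, enabled: "yes" }
    - { name: designate, enabled: true }
    # ... remaining services as in the log, through octavia

# policy_item.yml (sketch): pick up an operator-supplied policy file, if any.
- name: Update policy file name
  ansible.builtin.set_fact:
    policy_file: "{{ item.name }}_policy.yaml"  # assumed naming convention

- name: Check if policies shall be overwritten
  ansible.builtin.stat:
    path: "{{ node_custom_config }}/horizon/{{ policy_file }}"  # assumed path
  delegate_to: localhost
  run_once: true
  register: custom_policy

- name: Update custom policy file name
  ansible.builtin.set_fact:
    custom_policy_file: "{{ custom_policy.stat.path }}"
  when: custom_policy.stat.exists
```

The `when: item.enabled | bool` guard matches the log: items with 'enabled': False are skipped on every node, while both 'yes' and True evaluate truthy; the single skipping: [testbed-node-0] line per check is consistent with a run_once stat delegated to localhost.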
2025-02-04 09:42:29.517163 | orchestrator | Tuesday 04 February 2025 09:41:43 +0000 (0:00:00.142) 0:00:20.235 ****** 2025-02-04 09:42:29.517175 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:42:29.517189 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:42:29.517202 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:42:29.517217 | orchestrator | 2025-02-04 09:42:29.517230 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-02-04 09:42:29.517242 | orchestrator | Tuesday 04 February 2025 09:41:44 +0000 (0:00:00.973) 0:00:21.208 ****** 2025-02-04 09:42:29.517255 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:42:29.517269 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:42:29.517283 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:42:29.517297 | orchestrator | 2025-02-04 09:42:29.517310 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-02-04 09:42:29.517323 | orchestrator | Tuesday 04 February 2025 09:41:45 +0000 (0:00:01.060) 0:00:22.268 ****** 2025-02-04 09:42:29.517337 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:42:29.517353 | orchestrator | 2025-02-04 09:42:29.517367 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-02-04 09:42:29.517382 | orchestrator | Tuesday 04 February 2025 09:41:45 +0000 (0:00:00.131) 0:00:22.399 ****** 2025-02-04 09:42:29.517397 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:42:29.517422 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:42:29.517432 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:42:29.517441 | orchestrator | 2025-02-04 09:42:29.517451 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-02-04 09:42:29.517467 | orchestrator | Tuesday 04 February 2025 09:41:46 +0000 (0:00:00.371) 0:00:22.770 ****** 2025-02-04 09:42:29.517481 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:42:29.517496 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:42:29.517510 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:42:29.517525 | orchestrator | 2025-02-04 09:42:29.517538 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-02-04 09:42:29.517553 | orchestrator | Tuesday 04 February 2025 09:41:49 +0000 (0:00:03.495) 0:00:26.266 ****** 2025-02-04 09:42:29.517568 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-02-04 09:42:29.517583 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-02-04 09:42:29.517597 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-02-04 09:42:29.517613 | orchestrator | 2025-02-04 09:42:29.517628 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-02-04 09:42:29.517651 | orchestrator | Tuesday 04 February 2025 09:41:53 +0000 (0:00:03.514) 0:00:29.780 ****** 2025-02-04 09:42:29.517665 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-02-04 09:42:29.517681 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-02-04 09:42:29.517694 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-02-04 
09:42:29.517708 | orchestrator | 2025-02-04 09:42:29.517720 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-02-04 09:42:29.517734 | orchestrator | Tuesday 04 February 2025 09:41:56 +0000 (0:00:03.513) 0:00:33.294 ****** 2025-02-04 09:42:29.517747 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-02-04 09:42:29.517760 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-02-04 09:42:29.517784 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-02-04 09:42:29.517797 | orchestrator | 2025-02-04 09:42:29.517811 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-02-04 09:42:29.517825 | orchestrator | Tuesday 04 February 2025 09:42:00 +0000 (0:00:03.947) 0:00:37.241 ****** 2025-02-04 09:42:29.517838 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:42:29.517852 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:42:29.517866 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:42:29.517880 | orchestrator | 2025-02-04 09:42:29.517894 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-02-04 09:42:29.517909 | orchestrator | Tuesday 04 February 2025 09:42:01 +0000 (0:00:00.600) 0:00:37.842 ****** 2025-02-04 09:42:29.517923 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:42:29.517937 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:42:29.517952 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:42:29.517966 | orchestrator | 2025-02-04 09:42:29.517979 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-02-04 09:42:29.517993 | orchestrator | Tuesday 04 February 2025 09:42:01 +0000 (0:00:00.622) 0:00:38.464 ****** 2025-02-04 09:42:29.518007 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:42:29.518089 | orchestrator | 2025-02-04 09:42:29.518116 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-02-04 09:42:29.518130 | orchestrator | Tuesday 04 February 2025 09:42:03 +0000 (0:00:01.242) 0:00:39.706 ****** 2025-02-04 09:42:29.518162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-04 09:42:29.518183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-04 09:42:29.518219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-04 09:42:29.518244 | orchestrator | 2025-02-04 09:42:29.518259 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-02-04 09:42:29.518274 | orchestrator | Tuesday 04 February 2025 09:42:05 +0000 (0:00:02.559) 0:00:42.265 ****** 2025-02-04 09:42:29.518289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-02-04 09:42:29.518305 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:42:29.518329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-02-04 09:42:29.518354 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:42:29.518369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 
'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-02-04 09:42:29.518384 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:42:29.518399 | orchestrator | 2025-02-04 09:42:29.518413 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-02-04 09:42:29.518427 | orchestrator | Tuesday 04 February 2025 09:42:08 +0000 (0:00:02.484) 0:00:44.750 ****** 2025-02-04 09:42:29.518451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-02-04 09:42:29.518475 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:42:29.518498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-02-04 09:42:29.518514 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:42:29.518529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 
'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-02-04 09:42:29.518552 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:42:29.518576 | orchestrator | 2025-02-04 09:42:29.518593 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-02-04 09:42:29.518608 | orchestrator | Tuesday 04 February 2025 09:42:11 +0000 (0:00:02.864) 0:00:47.614 ****** 2025-02-04 09:42:29.518633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-04 09:42:29.518661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-04 09:42:29.518687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/horizon:2024.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'yes', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-02-04 09:42:29.518711 | orchestrator | 2025-02-04 09:42:29.518727 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-02-04 09:42:29.518743 | orchestrator | Tuesday 04 February 2025 09:42:19 +0000 (0:00:08.620) 0:00:56.235 ****** 2025-02-04 09:42:29.518759 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:42:29.518775 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:42:29.518792 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:42:29.518807 | orchestrator | 2025-02-04 09:42:29.518822 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-02-04 09:42:29.518838 | orchestrator | Tuesday 04 February 2025 09:42:20 +0000 (0:00:00.425) 0:00:56.661 ****** 2025-02-04 09:42:29.518854 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:42:29.518869 | orchestrator | 2025-02-04 09:42:29.518884 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-02-04 09:42:29.518900 | orchestrator | Tuesday 04 February 2025 09:42:20 +0000 (0:00:00.596) 0:00:57.257 ****** 2025-02-04 09:42:29.518917 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"action": "mysql_db", "changed": false, "msg": "unable to find /var/lib/ansible/.my.cnf. 
Exception message: (2003, \"Can't connect to MySQL server on 'api-int.testbed.osism.xyz' ([Errno 113] No route to host)\")"} 2025-02-04 09:42:29.518934 | orchestrator | 2025-02-04 09:42:29.518950 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:42:29.518965 | orchestrator | testbed-node-0 : ok=37  changed=7  unreachable=0 failed=1  skipped=29  rescued=0 ignored=0 2025-02-04 09:42:29.518982 | orchestrator | testbed-node-1 : ok=37  changed=7  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-02-04 09:42:29.518997 | orchestrator | testbed-node-2 : ok=37  changed=7  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-02-04 09:42:29.519013 | orchestrator | 2025-02-04 09:42:29.519029 | orchestrator | 2025-02-04 09:42:29.519044 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:42:29.519059 | orchestrator | Tuesday 04 February 2025 09:42:26 +0000 (0:00:05.422) 0:01:02.679 ****** 2025-02-04 09:42:29.519093 | orchestrator | =============================================================================== 2025-02-04 09:42:29.519108 | orchestrator | horizon : Deploy horizon container -------------------------------------- 8.62s 2025-02-04 09:42:29.519122 | orchestrator | horizon : Creating Horizon database ------------------------------------- 5.42s 2025-02-04 09:42:29.519137 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 3.95s 2025-02-04 09:42:29.519151 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 3.51s 2025-02-04 09:42:29.519165 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 3.51s 2025-02-04 09:42:29.519179 | orchestrator | horizon : Copying over config.json files for services ------------------- 3.50s 2025-02-04 09:42:29.519194 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 2.86s 2025-02-04 09:42:29.519208 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 2.56s 2025-02-04 09:42:29.519222 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 2.48s 2025-02-04 09:42:29.519244 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 2.40s 2025-02-04 09:42:29.519259 | orchestrator | horizon : include_tasks ------------------------------------------------- 1.26s 2025-02-04 09:42:29.519273 | orchestrator | horizon : include_tasks ------------------------------------------------- 1.24s 2025-02-04 09:42:29.519288 | orchestrator | horizon : Update policy file name --------------------------------------- 1.06s 2025-02-04 09:42:29.519308 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.97s 2025-02-04 09:42:29.519323 | orchestrator | horizon : Update policy file name --------------------------------------- 0.91s 2025-02-04 09:42:29.519337 | orchestrator | horizon : Update policy file name --------------------------------------- 0.89s 2025-02-04 09:42:29.519351 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.79s 2025-02-04 09:42:29.519365 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.75s 2025-02-04 09:42:29.519387 | orchestrator | horizon : Update policy file name --------------------------------------- 0.68s 2025-02-04 09:42:32.556813 | orchestrator 
| horizon : Update policy file name --------------------------------------- 0.64s 2025-02-04 09:42:32.556947 | orchestrator | 2025-02-04 09:42:29 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:42:32.556968 | orchestrator | 2025-02-04 09:42:29 | INFO  | Task 966254e6-54a6-4070-9ba7-b76f5e559021 is in state STARTED 2025-02-04 09:42:32.556983 | orchestrator | 2025-02-04 09:42:29 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:42:32.557016 | orchestrator | 2025-02-04 09:42:32 | INFO  | Task f81953cc-8580-436a-9e78-599283781822 is in state STARTED 2025-02-04 09:42:32.557251 | orchestrator | 2025-02-04 09:42:32 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:42:32.558270 | orchestrator | 2025-02-04 09:42:32 | INFO  | Task 966254e6-54a6-4070-9ba7-b76f5e559021 is in state STARTED 2025-02-04 09:42:32.558384 | orchestrator | 2025-02-04 09:42:32 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:42:35.610539 | orchestrator | 2025-02-04 09:42:35 | INFO  | Task f81953cc-8580-436a-9e78-599283781822 is in state STARTED 2025-02-04 09:42:35.611338 | orchestrator | 2025-02-04 09:42:35 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:42:35.612014 | orchestrator | 2025-02-04 09:42:35 | INFO  | Task 966254e6-54a6-4070-9ba7-b76f5e559021 is in state STARTED 2025-02-04 09:42:38.657566 | orchestrator | 2025-02-04 09:42:35 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:42:38.657737 | orchestrator | 2025-02-04 09:42:38 | INFO  | Task f81953cc-8580-436a-9e78-599283781822 is in state STARTED 2025-02-04 09:42:38.662011 | orchestrator | 2025-02-04 09:42:38.662178 | orchestrator | 2025-02-04 09:42:38.662200 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-04 09:42:38.662214 | orchestrator | 2025-02-04 09:42:38.662228 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-04 09:42:38.662240 | orchestrator | Tuesday 04 February 2025 09:41:24 +0000 (0:00:00.407) 0:00:00.407 ****** 2025-02-04 09:42:38.662253 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:42:38.662267 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:42:38.662280 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:42:38.662293 | orchestrator | 2025-02-04 09:42:38.662306 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-04 09:42:38.662318 | orchestrator | Tuesday 04 February 2025 09:41:24 +0000 (0:00:00.486) 0:00:00.893 ****** 2025-02-04 09:42:38.662331 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-02-04 09:42:38.662344 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-02-04 09:42:38.662357 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-02-04 09:42:38.662373 | orchestrator | 2025-02-04 09:42:38.662395 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-02-04 09:42:38.662437 | orchestrator | 2025-02-04 09:42:38.662451 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-02-04 09:42:38.662463 | orchestrator | Tuesday 04 February 2025 09:41:25 +0000 (0:00:00.531) 0:00:01.424 ****** 2025-02-04 09:42:38.662481 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:42:38.662504 | orchestrator | 
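The horizon play above fails at "Creating Horizon database": the mysql_db module on testbed-node-0 reports [Errno 113] No route to host for api-int.testbed.osism.xyz, i.e. the internal API VIP was unreachable from that node at that moment (the module folds the missing /var/lib/ansible/.my.cnf hint into the same failed login attempt). A minimal reachability probe for the VIP, written as a sketch: the hostname comes from the error above, while port 3306 is assumed (the standard MariaDB port; the message does not print it).

    import socket

    def can_reach(host: str, port: int, timeout: float = 5.0) -> bool:
        """Return True if a TCP connection to host:port succeeds within timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError as exc:  # EHOSTUNREACH ([Errno 113]) surfaces here
            print(f"{host}:{port} unreachable: {exc}")
            return False

    # Run from testbed-node-0; port 3306 is an assumption, not taken from the log.
    can_reach("api-int.testbed.osism.xyz", 3306)

A "No route to host" (rather than a timeout or "connection refused") usually points at the VIP not being assigned or routed on the node's internal network, so checking the keepalived/haproxy VIP state is a sensible first step before re-running the play.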
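Every horizon HAProxy entry in the item dumps above injects the same frontend rule, use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }, including on the port-80 redirect frontends, so ACME HTTP-01 challenges are answered over plain HTTP while all other traffic is redirected or proxied to the dashboard. A small sketch of that path predicate; the regex is copied verbatim from the log, and the non-ACME backend name below is an illustrative placeholder:

    import re

    # Regex copied from the frontend_http_extra / frontend_redirect_extra entries above.
    ACME_CHALLENGE = re.compile(r"^/.well-known/acme-challenge/.+")

    def pick_backend(path: str) -> str:
        """Mimic the HAProxy rule: challenge paths go to acme_client_back."""
        if ACME_CHALLENGE.match(path):
            return "acme_client_back"
        return "horizon_back"  # placeholder for the default dashboard backend

    assert pick_backend("/.well-known/acme-challenge/tok3n") == "acme_client_back"
    assert pick_backend("/auth/login/") == "horizon_back"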
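The container definitions above all share one healthcheck shape, {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', ...], 'timeout': '30'} (values in seconds): probe every 30 s, allow 30 s per probe, grant a 5 s grace period after start, and flag unhealthy only after 3 consecutive failures. A rough emulation of those retry semantics, substituting a plain HTTP GET for kolla's healthcheck_curl helper (the URL in the usage comment is one of the keystone endpoints from the log):

    import time
    import urllib.request

    def probe(url: str, interval: float = 30.0, retries: int = 3,
              timeout: float = 30.0) -> bool:
        """Succeed on the first healthy response; report unhealthy only after
        `retries` consecutive failures, waiting `interval` seconds between tries."""
        failures = 0
        while failures < retries:
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    if resp.status < 400:
                        return True
            except OSError:
                pass  # URLError/HTTPError are OSError subclasses
            failures += 1
            time.sleep(interval)
        return False

    # probe("http://192.168.16.10:5000")  # keystone healthcheck target on testbed-node-0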
2025-02-04 09:42:38.662527 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-02-04 09:42:38.662548 | orchestrator | Tuesday 04 February 2025 09:41:26 +0000 (0:00:01.196) 0:00:02.620 ****** 2025-02-04 09:42:38.662576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-04 09:42:38.662604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-04 09:42:38.662690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-04 09:42:38.662722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-04 09:42:38.662762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-04 09:42:38.662787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-04 09:42:38.662809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-04 09:42:38.662833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-04 09:42:38.662856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-04 09:42:38.662877 | orchestrator | 2025-02-04 09:42:38.662898 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-02-04 09:42:38.662921 | orchestrator | Tuesday 04 February 2025 09:41:29 +0000 (0:00:02.863) 0:00:05.484 ****** 2025-02-04 09:42:38.662953 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-02-04 09:42:38.662968 | orchestrator | 2025-02-04 09:42:38.662990 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-02-04 09:42:38.663002 | orchestrator | Tuesday 04 February 2025 09:41:30 +0000 (0:00:00.671) 0:00:06.156 ****** 2025-02-04 09:42:38.663015 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:42:38.663028 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:42:38.663041 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:42:38.663089 | orchestrator | 2025-02-04 09:42:38.663113 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-02-04 09:42:38.663134 | orchestrator | Tuesday 04 February 2025 09:41:30 +0000 (0:00:00.545) 0:00:06.702 ****** 2025-02-04 09:42:38.663148 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-04 09:42:38.663165 | orchestrator | 2025-02-04 09:42:38.663187 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-02-04 09:42:38.663208 | orchestrator | Tuesday 04 February 2025 09:41:30 +0000 (0:00:00.438) 0:00:07.140 ****** 2025-02-04 09:42:38.663228 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:42:38.663250 | orchestrator | 2025-02-04 09:42:38.663272 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-02-04 09:42:38.663294 | orchestrator | Tuesday 04 February 2025 09:41:32 +0000 (0:00:01.051) 0:00:08.191 ****** 2025-02-04 09:42:38.663309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-04 09:42:38.663324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-04 09:42:38.663339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-04 09:42:38.663387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-04 09:42:38.663410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-04 09:42:38.663433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-04 09:42:38.663456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-04 09:42:38.663478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-04 09:42:38.663492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-04 09:42:38.663511 | orchestrator | 2025-02-04 09:42:38.663524 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-02-04 09:42:38.663537 | orchestrator | Tuesday 04 February 2025 09:41:35 +0000 (0:00:03.642) 0:00:11.834 ****** 2025-02-04 09:42:38.663558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}}}})  2025-02-04 09:42:38.663573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-04 09:42:38.663587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-04 09:42:38.663600 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:42:38.663620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-02-04 09:42:38.663642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-04 09:42:38.663695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-04 09:42:38.663738 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:42:38.663764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-02-04 09:42:38.663787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-04 09:42:38.663801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-04 09:42:38.663816 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:42:38.663838 | orchestrator | 2025-02-04 09:42:38.663859 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-02-04 09:42:38.663880 | orchestrator | Tuesday 04 February 2025 09:41:38 +0000 (0:00:02.406) 0:00:14.240 ****** 2025-02-04 09:42:38.663902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-02-04 09:42:38.663951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-04 09:42:38.663974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-04 09:42:38.663987 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:42:38.664001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-02-04 09:42:38.664014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-04 09:42:38.664027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-04 09:42:38.664046 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:42:38.664116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-02-04 09:42:38.664132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-04 09:42:38.664146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-02-04 09:42:38.664159 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:42:38.664172 | orchestrator | 2025-02-04 09:42:38.664185 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-02-04 09:42:38.664204 | orchestrator | Tuesday 04 February 2025 09:41:39 +0000 (0:00:01.383) 0:00:15.624 ****** 2025-02-04 09:42:38.664217 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-04 09:42:38.664237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-04 09:42:38.664257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-04 09:42:38.664271 | orchestrator | 2025-02-04 09:42:38 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:42:38.664287 | orchestrator | 2025-02-04 09:42:38 | INFO  | Task 966254e6-54a6-4070-9ba7-b76f5e559021 is in state SUCCESS 2025-02-04 09:42:38.664306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group':
'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-04 09:42:38.664324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-04 09:42:38.664342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-04 09:42:38.664367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-04 09:42:38.664387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-04 09:42:38.664413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-04 09:42:38.664425 | orchestrator | 2025-02-04 09:42:38.664435 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-02-04 09:42:38.664446 | orchestrator | Tuesday 04 February 2025 09:41:43 +0000 (0:00:03.851) 0:00:19.475 ****** 2025-02-04 09:42:38.664457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-04 09:42:38.664476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-04 09:42:38.664502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-04 09:42:38.664520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-04 09:42:38.664546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-04 09:42:38.664567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-04 09:42:38.664586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-04 09:42:38.664613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-04 09:42:38.664625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-04 09:42:38.664636 | orchestrator | 2025-02-04 09:42:38.664646 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-02-04 09:42:38.664657 | orchestrator | Tuesday 04 February 2025 09:41:51 +0000 (0:00:08.650) 0:00:28.126 ****** 2025-02-04 09:42:38.664667 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:42:38.664678 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:42:38.664688 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:42:38.664698 | orchestrator | 2025-02-04 09:42:38.664709 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-02-04 09:42:38.664719 | orchestrator | Tuesday 04 February 2025 09:41:54 +0000 (0:00:02.869) 0:00:30.995 ****** 2025-02-04 09:42:38.664732 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:42:38.664750 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:42:38.664767 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:42:38.664784 | orchestrator | 2025-02-04 09:42:38.664802 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-02-04 09:42:38.664819 | orchestrator | Tuesday 04 February 2025 09:41:56 +0000 (0:00:01.622) 0:00:32.618 ****** 2025-02-04 09:42:38.664837 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:42:38.664855 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:42:38.664869 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:42:38.664879 | orchestrator | 2025-02-04 09:42:38.664895 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-02-04 09:42:38.664905 | orchestrator | Tuesday 04 February 2025 09:41:57 +0000 (0:00:00.559) 0:00:33.177 ****** 2025-02-04 09:42:38.664916 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:42:38.664926 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:42:38.664936 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:42:38.664947 | orchestrator | 2025-02-04 09:42:38.664957 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-02-04 09:42:38.664967 | orchestrator | Tuesday 04 February 2025 09:41:57 +0000 (0:00:00.772) 0:00:33.949 ****** 2025-02-04 09:42:38.664978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-04 09:42:38.664996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-04 09:42:38.665008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-04 09:42:38.665019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-04 09:42:38.665035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-04 09:42:38.665046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-02-04 09:42:38.665082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-04 09:42:38.665094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-04 09:42:38.665105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-04 09:42:38.665116 | orchestrator | 2025-02-04 09:42:38.665127 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-02-04 09:42:38.665137 | orchestrator | Tuesday 04 February 2025 09:42:02 +0000 (0:00:04.331) 0:00:38.280 ****** 2025-02-04 09:42:38.665148 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:42:38.665158 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:42:38.665168 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:42:38.665179 | orchestrator | 2025-02-04 09:42:38.665189 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-02-04 09:42:38.665200 | orchestrator | Tuesday 04 February 2025 09:42:02 +0000 (0:00:00.357) 0:00:38.638 ****** 2025-02-04 09:42:38.665210 
| orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-02-04 09:42:38.665221 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-02-04 09:42:38.665231 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-02-04 09:42:38.665241 | orchestrator | 2025-02-04 09:42:38.665252 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-02-04 09:42:38.665262 | orchestrator | Tuesday 04 February 2025 09:42:05 +0000 (0:00:03.118) 0:00:41.756 ****** 2025-02-04 09:42:38.665272 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-04 09:42:38.665282 | orchestrator | 2025-02-04 09:42:38.665297 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-02-04 09:42:38.665308 | orchestrator | Tuesday 04 February 2025 09:42:06 +0000 (0:00:01.221) 0:00:42.978 ****** 2025-02-04 09:42:38.665318 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:42:38.665328 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:42:38.665343 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:42:38.665354 | orchestrator | 2025-02-04 09:42:38.665364 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-02-04 09:42:38.665374 | orchestrator | Tuesday 04 February 2025 09:42:09 +0000 (0:00:02.507) 0:00:45.485 ****** 2025-02-04 09:42:38.665385 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-02-04 09:42:38.665395 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-04 09:42:38.665405 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-02-04 09:42:38.665415 | orchestrator | 2025-02-04 09:42:38.665426 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-02-04 09:42:38.665436 | orchestrator | Tuesday 04 February 2025 09:42:11 +0000 (0:00:02.093) 0:00:47.579 ****** 2025-02-04 09:42:38.665446 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:42:38.665457 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:42:38.665467 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:42:38.665477 | orchestrator | 2025-02-04 09:42:38.665488 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-02-04 09:42:38.665498 | orchestrator | Tuesday 04 February 2025 09:42:12 +0000 (0:00:01.410) 0:00:48.989 ****** 2025-02-04 09:42:38.665508 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-02-04 09:42:38.665518 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-02-04 09:42:38.665529 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-02-04 09:42:38.665539 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-02-04 09:42:38.665550 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-02-04 09:42:38.665560 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-02-04 09:42:38.665570 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-02-04 09:42:38.665587 | orchestrator | changed: [testbed-node-1] => (item={'src': 
'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-02-04 09:42:38.665605 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-02-04 09:42:38.665621 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-02-04 09:42:38.665640 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-02-04 09:42:38.665657 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-02-04 09:42:38.665668 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-02-04 09:42:38.665678 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-02-04 09:42:38.665689 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-02-04 09:42:38.665699 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-02-04 09:42:38.665709 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-02-04 09:42:38.665724 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-02-04 09:42:38.665735 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-02-04 09:42:38.665745 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-02-04 09:42:38.665756 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-02-04 09:42:38.665766 | orchestrator | 2025-02-04 09:42:38.665777 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-02-04 09:42:38.665800 | orchestrator | Tuesday 04 February 2025 09:42:26 +0000 (0:00:13.945) 0:01:02.935 ****** 2025-02-04 09:42:38.665818 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-02-04 09:42:38.665834 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-02-04 09:42:38.665851 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-02-04 09:42:38.665868 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-02-04 09:42:38.665886 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-02-04 09:42:38.665903 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-02-04 09:42:38.665918 | orchestrator | 2025-02-04 09:42:38.665935 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-02-04 09:42:38.665953 | orchestrator | Tuesday 04 February 2025 09:42:30 +0000 (0:00:03.696) 0:01:06.631 ****** 2025-02-04 09:42:38.665982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-04 09:42:38.666004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-04 09:42:38.666073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone:2024.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-02-04 09:42:38.666110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-04 09:42:38.666140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 
'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-04 09:42:38.666159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-ssh:2024.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-02-04 09:42:38.666178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-04 09:42:38.666197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-04 09:42:38.666215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/keystone-fernet:2024.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-02-04 09:42:38.666233 | orchestrator | 2025-02-04 09:42:38.666252 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-02-04 09:42:38.666277 | orchestrator | Tuesday 04 February 2025 09:42:33 +0000 (0:00:03.248) 0:01:09.879 ****** 2025-02-04 09:42:38.666288 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:42:38.666299 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:42:38.666309 
| orchestrator | skipping: [testbed-node-2] 2025-02-04 09:42:38.666320 | orchestrator | 2025-02-04 09:42:38.666330 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-02-04 09:42:38.666340 | orchestrator | Tuesday 04 February 2025 09:42:34 +0000 (0:00:00.497) 0:01:10.377 ****** 2025-02-04 09:42:38.666351 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"action": "mysql_db", "changed": false, "msg": "unable to find /var/lib/ansible/.my.cnf. Exception message: (2003, \"Can't connect to MySQL server on 'api-int.testbed.osism.xyz' ([Errno 113] No route to host)\")"} 2025-02-04 09:42:38.666362 | orchestrator | 2025-02-04 09:42:38.666373 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:42:38.666383 | orchestrator | testbed-node-0 : ok=20  changed=10  unreachable=0 failed=1  skipped=8  rescued=0 ignored=0 2025-02-04 09:42:38.666396 | orchestrator | testbed-node-1 : ok=17  changed=10  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-02-04 09:42:38.666415 | orchestrator | testbed-node-2 : ok=17  changed=10  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-02-04 09:42:38.666432 | orchestrator | 2025-02-04 09:42:38.666449 | orchestrator | 2025-02-04 09:42:38.666466 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:42:38.666484 | orchestrator | Tuesday 04 February 2025 09:42:38 +0000 (0:00:03.777) 0:01:14.154 ****** 2025-02-04 09:42:38.666501 | orchestrator | =============================================================================== 2025-02-04 09:42:38.666527 | orchestrator | keystone : Copying files for keystone-fernet --------------------------- 13.95s 2025-02-04 09:42:41.702684 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 8.65s 2025-02-04 09:42:41.702821 | orchestrator | keystone : Copying over existing policy file ---------------------------- 4.33s 2025-02-04 09:42:41.702843 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.85s 2025-02-04 09:42:41.702859 | orchestrator | keystone : Creating keystone database ----------------------------------- 3.78s 2025-02-04 09:42:41.702874 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.70s 2025-02-04 09:42:41.703021 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.64s 2025-02-04 09:42:41.703042 | orchestrator | keystone : Check keystone containers ------------------------------------ 3.25s 2025-02-04 09:42:41.703088 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 3.12s 2025-02-04 09:42:41.703104 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 2.87s 2025-02-04 09:42:41.703119 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.86s 2025-02-04 09:42:41.703134 | orchestrator | keystone : Copying over keystone-paste.ini ------------------------------ 2.51s 2025-02-04 09:42:41.703148 | orchestrator | service-cert-copy : keystone | Copying over backend internal TLS certificate --- 2.41s 2025-02-04 09:42:41.703184 | orchestrator | keystone : Generate the required cron jobs for the node ----------------- 2.09s 2025-02-04 09:42:41.703199 | orchestrator | keystone : Create Keystone domain-specific config directory ------------- 1.62s 2025-02-04 09:42:41.703213 | 
orchestrator | keystone : Set fact with the generated cron jobs for building the crontab later --- 1.41s 2025-02-04 09:42:41.703229 | orchestrator | service-cert-copy : keystone | Copying over backend internal TLS key ---- 1.38s 2025-02-04 09:42:41.703252 | orchestrator | keystone : Checking whether keystone-paste.ini file exists -------------- 1.22s 2025-02-04 09:42:41.703276 | orchestrator | keystone : include_tasks ------------------------------------------------ 1.20s 2025-02-04 09:42:41.703432 | orchestrator | keystone : include_tasks ------------------------------------------------ 1.05s 2025-02-04 09:42:41.703455 | orchestrator | 2025-02-04 09:42:38 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:42:41.703487 | orchestrator | 2025-02-04 09:42:41 | INFO  | Task f81953cc-8580-436a-9e78-599283781822 is in state STARTED 2025-02-04 09:42:41.704492 | orchestrator | 2025-02-04 09:42:41 | INFO  | Task e5ff8f94-29eb-4bb7-8960-82e778b51d71 is in state STARTED 2025-02-04 09:42:41.704519 | orchestrator | 2025-02-04 09:42:41 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:42:41.704541 | orchestrator | 2025-02-04 09:42:41 | INFO  | Task 71254739-7aed-4310-944a-43301f546576 is in state STARTED 2025-02-04 09:42:41.705280 | orchestrator | 2025-02-04 09:42:41 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:42:41.706456 | orchestrator | 2025-02-04 09:42:41 | INFO  | Task 36188578-51d5-4b4f-9ce4-918615765364 is in state STARTED 2025-02-04 09:42:44.754290 | orchestrator | 2025-02-04 09:42:41 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:42:44.754471 | orchestrator | 2025-02-04 09:42:44 | INFO  | Task f81953cc-8580-436a-9e78-599283781822 is in state SUCCESS 2025-02-04 09:42:44.755355 | orchestrator | 2025-02-04 09:42:44 | INFO  | Task e5ff8f94-29eb-4bb7-8960-82e778b51d71 is in state STARTED 2025-02-04 09:42:44.755426 | orchestrator | 2025-02-04 09:42:44 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:42:44.756312 | orchestrator | 2025-02-04 09:42:44 | INFO  | Task 71254739-7aed-4310-944a-43301f546576 is in state STARTED 2025-02-04 09:42:44.757562 | orchestrator | 2025-02-04 09:42:44 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:42:44.758922 | orchestrator | 2025-02-04 09:42:44 | INFO  | Task 36188578-51d5-4b4f-9ce4-918615765364 is in state STARTED 2025-02-04 09:42:47.799656 | orchestrator | 2025-02-04 09:42:44 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:42:47.799805 | orchestrator | 2025-02-04 09:42:47 | INFO  | Task e5ff8f94-29eb-4bb7-8960-82e778b51d71 is in state STARTED 2025-02-04 09:42:47.800677 | orchestrator | 2025-02-04 09:42:47 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:42:47.802470 | orchestrator | 2025-02-04 09:42:47 | INFO  | Task 71254739-7aed-4310-944a-43301f546576 is in state STARTED 2025-02-04 09:42:47.804205 | orchestrator | 2025-02-04 09:42:47 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:42:47.805696 | orchestrator | 2025-02-04 09:42:47 | INFO  | Task 36188578-51d5-4b4f-9ce4-918615765364 is in state STARTED 2025-02-04 09:42:50.849101 | orchestrator | 2025-02-04 09:42:47 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:42:50.849200 | orchestrator | 2025-02-04 09:42:50 | INFO  | Task e5ff8f94-29eb-4bb7-8960-82e778b51d71 is in state STARTED 2025-02-04 09:42:50.851294 | orchestrator | 2025-02-04 
09:42:50 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:42:50.854757 | orchestrator | 2025-02-04 09:42:50 | INFO  | Task 71254739-7aed-4310-944a-43301f546576 is in state STARTED 2025-02-04 09:42:50.855719 | orchestrator | 2025-02-04 09:42:50 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:42:50.855739 | orchestrator | 2025-02-04 09:42:50 | INFO  | Task 36188578-51d5-4b4f-9ce4-918615765364 is in state STARTED 2025-02-04 09:42:53.905995 | orchestrator | 2025-02-04 09:42:50 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:42:53.906181 | orchestrator | 2025-02-04 09:42:53 | INFO  | Task e5ff8f94-29eb-4bb7-8960-82e778b51d71 is in state STARTED 2025-02-04 09:42:53.907313 | orchestrator | 2025-02-04 09:42:53 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:42:53.908401 | orchestrator | 2025-02-04 09:42:53 | INFO  | Task 71254739-7aed-4310-944a-43301f546576 is in state STARTED 2025-02-04 09:42:53.910140 | orchestrator | 2025-02-04 09:42:53 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:42:53.912139 | orchestrator | 2025-02-04 09:42:53 | INFO  | Task 36188578-51d5-4b4f-9ce4-918615765364 is in state STARTED 2025-02-04 09:42:56.949518 | orchestrator | 2025-02-04 09:42:53 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:42:56.950249 | orchestrator | 2025-02-04 09:42:56 | INFO  | Task e5ff8f94-29eb-4bb7-8960-82e778b51d71 is in state STARTED 2025-02-04 09:42:56.950544 | orchestrator | 2025-02-04 09:42:56 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:42:56.950565 | orchestrator | 2025-02-04 09:42:56 | INFO  | Task 71254739-7aed-4310-944a-43301f546576 is in state STARTED 2025-02-04 09:42:56.951041 | orchestrator | 2025-02-04 09:42:56 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:42:56.951814 | orchestrator | 2025-02-04 09:42:56 | INFO  | Task 36188578-51d5-4b4f-9ce4-918615765364 is in state STARTED 2025-02-04 09:43:00.011219 | orchestrator | 2025-02-04 09:42:56 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:43:00.011367 | orchestrator | 2025-02-04 09:43:00 | INFO  | Task e5ff8f94-29eb-4bb7-8960-82e778b51d71 is in state STARTED 2025-02-04 09:43:00.013361 | orchestrator | 2025-02-04 09:43:00 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:43:00.015298 | orchestrator | 2025-02-04 09:43:00 | INFO  | Task 71254739-7aed-4310-944a-43301f546576 is in state STARTED 2025-02-04 09:43:00.018174 | orchestrator | 2025-02-04 09:43:00 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:43:00.019717 | orchestrator | 2025-02-04 09:43:00 | INFO  | Task 36188578-51d5-4b4f-9ce4-918615765364 is in state STARTED 2025-02-04 09:43:03.070524 | orchestrator | 2025-02-04 09:43:00 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:43:03.070672 | orchestrator | 2025-02-04 09:43:03 | INFO  | Task e5ff8f94-29eb-4bb7-8960-82e778b51d71 is in state STARTED 2025-02-04 09:43:03.071795 | orchestrator | 2025-02-04 09:43:03 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:43:03.073572 | orchestrator | 2025-02-04 09:43:03 | INFO  | Task 71254739-7aed-4310-944a-43301f546576 is in state STARTED 2025-02-04 09:43:03.074956 | orchestrator | 2025-02-04 09:43:03 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:43:03.076376 | 
orchestrator | 2025-02-04 09:43:03 | INFO  | Task 36188578-51d5-4b4f-9ce4-918615765364 is in state STARTED 2025-02-04 09:43:06.117584 | orchestrator | 2025-02-04 09:43:03 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:43:06.117696 | orchestrator | 2025-02-04 09:43:06 | INFO  | Task e5ff8f94-29eb-4bb7-8960-82e778b51d71 is in state STARTED 2025-02-04 09:43:06.118572 | orchestrator | 2025-02-04 09:43:06 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:43:06.120644 | orchestrator | 2025-02-04 09:43:06 | INFO  | Task 71254739-7aed-4310-944a-43301f546576 is in state STARTED 2025-02-04 09:43:06.122171 | orchestrator | 2025-02-04 09:43:06 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:43:06.123913 | orchestrator | 2025-02-04 09:43:06 | INFO  | Task 36188578-51d5-4b4f-9ce4-918615765364 is in state STARTED 2025-02-04 09:43:09.156968 | orchestrator | 2025-02-04 09:43:06 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:43:09.157184 | orchestrator | 2025-02-04 09:43:09 | INFO  | Task e5ff8f94-29eb-4bb7-8960-82e778b51d71 is in state STARTED 2025-02-04 09:43:09.158010 | orchestrator | 2025-02-04 09:43:09 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:43:09.158082 | orchestrator | 2025-02-04 09:43:09 | INFO  | Task 71254739-7aed-4310-944a-43301f546576 is in state STARTED 2025-02-04 09:43:09.159040 | orchestrator | 2025-02-04 09:43:09 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:43:09.159976 | orchestrator | 2025-02-04 09:43:09 | INFO  | Task 36188578-51d5-4b4f-9ce4-918615765364 is in state STARTED 2025-02-04 09:43:09.160096 | orchestrator | 2025-02-04 09:43:09 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:43:12.191236 | orchestrator | 2025-02-04 09:43:12 | INFO  | Task e5ff8f94-29eb-4bb7-8960-82e778b51d71 is in state STARTED 2025-02-04 09:43:12.191836 | orchestrator | 2025-02-04 09:43:12 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:43:12.191893 | orchestrator | 2025-02-04 09:43:12 | INFO  | Task 71254739-7aed-4310-944a-43301f546576 is in state STARTED 2025-02-04 09:43:12.191933 | orchestrator | 2025-02-04 09:43:12 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:43:12.192627 | orchestrator | 2025-02-04 09:43:12 | INFO  | Task 36188578-51d5-4b4f-9ce4-918615765364 is in state STARTED 2025-02-04 09:43:15.227571 | orchestrator | 2025-02-04 09:43:12 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:43:15.227684 | orchestrator | 2025-02-04 09:43:15 | INFO  | Task e5ff8f94-29eb-4bb7-8960-82e778b51d71 is in state STARTED 2025-02-04 09:43:15.228710 | orchestrator | 2025-02-04 09:43:15 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:43:15.230408 | orchestrator | 2025-02-04 09:43:15 | INFO  | Task 71254739-7aed-4310-944a-43301f546576 is in state STARTED 2025-02-04 09:43:15.232564 | orchestrator | 2025-02-04 09:43:15 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:43:15.236607 | orchestrator | 2025-02-04 09:43:15 | INFO  | Task 36188578-51d5-4b4f-9ce4-918615765364 is in state STARTED 2025-02-04 09:43:18.287751 | orchestrator | 2025-02-04 09:43:15 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:43:18.287882 | orchestrator | 2025-02-04 09:43:18 | INFO  | Task e5ff8f94-29eb-4bb7-8960-82e778b51d71 is in state STARTED 2025-02-04 09:43:18.290800 | 
orchestrator | 2025-02-04 09:43:18 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:43:18.293538 | orchestrator | 2025-02-04 09:43:18 | INFO  | Task 71254739-7aed-4310-944a-43301f546576 is in state STARTED 2025-02-04 09:43:18.295340 | orchestrator | 2025-02-04 09:43:18 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:43:18.298481 | orchestrator | 2025-02-04 09:43:18 | INFO  | Task 36188578-51d5-4b4f-9ce4-918615765364 is in state STARTED 2025-02-04 09:43:21.342728 | orchestrator | 2025-02-04 09:43:18 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:43:21.342907 | orchestrator | 2025-02-04 09:43:21 | INFO  | Task e5ff8f94-29eb-4bb7-8960-82e778b51d71 is in state STARTED 2025-02-04 09:43:21.345226 | orchestrator | 2025-02-04 09:43:21 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:43:21.349189 | orchestrator | 2025-02-04 09:43:21 | INFO  | Task 71254739-7aed-4310-944a-43301f546576 is in state STARTED 2025-02-04 09:43:21.353315 | orchestrator | 2025-02-04 09:43:21 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:43:21.354336 | orchestrator | 2025-02-04 09:43:21 | INFO  | Task 36188578-51d5-4b4f-9ce4-918615765364 is in state STARTED 2025-02-04 09:43:24.406913 | orchestrator | 2025-02-04 09:43:21 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:43:24.407168 | orchestrator | 2025-02-04 09:43:24 | INFO  | Task e5ff8f94-29eb-4bb7-8960-82e778b51d71 is in state STARTED 2025-02-04 09:43:24.407720 | orchestrator | 2025-02-04 09:43:24 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:43:24.408526 | orchestrator | 2025-02-04 09:43:24 | INFO  | Task 71254739-7aed-4310-944a-43301f546576 is in state STARTED 2025-02-04 09:43:24.409529 | orchestrator | 2025-02-04 09:43:24 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:43:24.410141 | orchestrator | 2025-02-04 09:43:24 | INFO  | Task 36188578-51d5-4b4f-9ce4-918615765364 is in state STARTED 2025-02-04 09:43:27.445522 | orchestrator | 2025-02-04 09:43:24 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:43:27.445734 | orchestrator | 2025-02-04 09:43:27 | INFO  | Task e5ff8f94-29eb-4bb7-8960-82e778b51d71 is in state STARTED 2025-02-04 09:43:27.447591 | orchestrator | 2025-02-04 09:43:27 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:43:27.447624 | orchestrator | 2025-02-04 09:43:27 | INFO  | Task 71254739-7aed-4310-944a-43301f546576 is in state STARTED 2025-02-04 09:43:27.448845 | orchestrator | 2025-02-04 09:43:27 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:43:27.449948 | orchestrator | 2025-02-04 09:43:27 | INFO  | Task 36188578-51d5-4b4f-9ce4-918615765364 is in state STARTED 2025-02-04 09:43:30.493310 | orchestrator | 2025-02-04 09:43:27 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:43:30.493415 | orchestrator | 2025-02-04 09:43:30 | INFO  | Task e5ff8f94-29eb-4bb7-8960-82e778b51d71 is in state STARTED 2025-02-04 09:43:30.494581 | orchestrator | 2025-02-04 09:43:30 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:43:30.496534 | orchestrator | 2025-02-04 09:43:30 | INFO  | Task 71254739-7aed-4310-944a-43301f546576 is in state STARTED 2025-02-04 09:43:30.498914 | orchestrator | 2025-02-04 09:43:30 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 
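The repeated "Task <uuid> is in state STARTED ... Wait 1 second(s) until the next check" records above come from the OSISM manager polling the state of each queued Ansible run once per second until it reaches a final state. A minimal sketch of such a wait loop, assuming a caller-supplied get_state callback in place of the real OSISM client API:

import time
from typing import Callable

def wait_for_tasks(task_ids: list[str],
                   get_state: Callable[[str], str],
                   interval: float = 1.0) -> None:
    # Poll until every task leaves STARTED, mirroring the log lines above.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

# Toy usage: each task reports STARTED twice, then SUCCESS.
calls: dict[str, int] = {}
def fake_state(task_id: str) -> str:
    calls[task_id] = calls.get(task_id, 0) + 1
    return "SUCCESS" if calls[task_id] > 2 else "STARTED"

wait_for_tasks(["f81953cc", "e5ff8f94"], fake_state, interval=0.01)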
2025-02-04 09:43:30.500728 | orchestrator | 2025-02-04 09:43:30 | INFO  | Task 36188578-51d5-4b4f-9ce4-918615765364 is in state STARTED 2025-02-04 09:43:30.501328 | orchestrator | 2025-02-04 09:43:30 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:43:33.535955 | orchestrator | 2025-02-04 09:43:33 | INFO  | Task e5ff8f94-29eb-4bb7-8960-82e778b51d71 is in state STARTED 2025-02-04 09:43:33.536630 | orchestrator | 2025-02-04 09:43:33 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:43:33.538106 | orchestrator | 2025-02-04 09:43:33 | INFO  | Task 71254739-7aed-4310-944a-43301f546576 is in state STARTED 2025-02-04 09:43:33.539360 | orchestrator | 2025-02-04 09:43:33 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:43:33.539749 | orchestrator | 2025-02-04 09:43:33 | INFO  | Task 36188578-51d5-4b4f-9ce4-918615765364 is in state STARTED 2025-02-04 09:43:33.539869 | orchestrator | 2025-02-04 09:43:33 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:43:36.580707 | orchestrator | 2025-02-04 09:43:36 | INFO  | Task e5ff8f94-29eb-4bb7-8960-82e778b51d71 is in state STARTED 2025-02-04 09:43:36.581616 | orchestrator | 2025-02-04 09:43:36 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:43:36.581680 | orchestrator | 2025-02-04 09:43:36 | INFO  | Task 71254739-7aed-4310-944a-43301f546576 is in state STARTED 2025-02-04 09:43:36.582117 | orchestrator | 2025-02-04 09:43:36 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:43:36.582936 | orchestrator | 2025-02-04 09:43:36 | INFO  | Task 36188578-51d5-4b4f-9ce4-918615765364 is in state STARTED 2025-02-04 09:43:39.620846 | orchestrator | 2025-02-04 09:43:36 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:43:39.621099 | orchestrator | 2025-02-04 09:43:39 | INFO  | Task e5ff8f94-29eb-4bb7-8960-82e778b51d71 is in state STARTED 2025-02-04 09:43:39.621638 | orchestrator | 2025-02-04 09:43:39 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:43:39.621697 | orchestrator | 2025-02-04 09:43:39 | INFO  | Task 71254739-7aed-4310-944a-43301f546576 is in state STARTED 2025-02-04 09:43:39.622441 | orchestrator | 2025-02-04 09:43:39 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:43:39.623397 | orchestrator | 2025-02-04 09:43:39 | INFO  | Task 36188578-51d5-4b4f-9ce4-918615765364 is in state STARTED 2025-02-04 09:43:42.664355 | orchestrator | 2025-02-04 09:43:39 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:43:42.664501 | orchestrator | 2025-02-04 09:43:42 | INFO  | Task e5ff8f94-29eb-4bb7-8960-82e778b51d71 is in state STARTED 2025-02-04 09:43:42.665661 | orchestrator | 2025-02-04 09:43:42 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED 2025-02-04 09:43:42.667816 | orchestrator | 2025-02-04 09:43:42 | INFO  | Task 71254739-7aed-4310-944a-43301f546576 is in state STARTED 2025-02-04 09:43:42.668922 | orchestrator | 2025-02-04 09:43:42 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:43:42.670600 | orchestrator | 2025-02-04 09:43:42 | INFO  | Task 36188578-51d5-4b4f-9ce4-918615765364 is in state STARTED 2025-02-04 09:43:45.707868 | orchestrator | 2025-02-04 09:43:42 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:43:45.708045 | orchestrator | 2025-02-04 09:43:45 | INFO  | Task e5ff8f94-29eb-4bb7-8960-82e778b51d71 is in state STARTED 
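The earlier fatal on testbed-node-0 — mysql_db failing with (2003, "Can't connect to MySQL server on 'api-int.testbed.osism.xyz' ([Errno 113] No route to host)") — points at the internal API VIP being unreachable from the node at that moment, not at a credentials problem. A quick TCP probe separates DNS, routing, and service issues; a minimal sketch, assuming MariaDB listens on the conventional port 3306 behind the internal VIP:

import socket

def probe(host: str, port: int, timeout: float = 5.0) -> str:
    # Classify reachability of host:port: DNS failure, TCP failure, or OK.
    try:
        ip = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)[0][4][0]
    except socket.gaierror as exc:
        return f"DNS resolution failed: {exc}"
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return f"{host}:{port} reachable via {ip}"
    except OSError as exc:  # [Errno 113] No route to host surfaces here
        return f"{host}:{port} unreachable ({ip}): {exc}"

print(probe("api-int.testbed.osism.xyz", 3306))

In a testbed like this, a common cause is that the name resolves but keepalived has not (yet) brought the VIP up on any control node, so nothing answers at that address.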
2025-02-04 09:43:45.708439 | orchestrator | 2025-02-04 09:43:45 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state STARTED
2025-02-04 09:43:45.709852 | orchestrator | 2025-02-04 09:43:45 | INFO  | Task 71254739-7aed-4310-944a-43301f546576 is in state STARTED
2025-02-04 09:43:45.711106 | orchestrator | 2025-02-04 09:43:45 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED
2025-02-04 09:43:45.712196 | orchestrator | 2025-02-04 09:43:45 | INFO  | Task 36188578-51d5-4b4f-9ce4-918615765364 is in state STARTED
2025-02-04 09:43:48.756892 | orchestrator | 2025-02-04 09:43:45 | INFO  | Wait 1 second(s) until the next check
2025-02-04 09:43:48.757151 | orchestrator | 2025-02-04 09:43:48 | INFO  | Task e5ff8f94-29eb-4bb7-8960-82e778b51d71 is in state STARTED
2025-02-04 09:43:48.758277 | orchestrator | 2025-02-04 09:43:48 | INFO  | Task e16d8d1c-5ed7-438a-8f5d-78419227f281 is in state STARTED
2025-02-04 09:43:48.758337 | orchestrator | 2025-02-04 09:43:48 | INFO  | Task ab9504f0-4a87-4925-8edb-f3e39a618573 is in state SUCCESS
2025-02-04 09:43:48.760411 | orchestrator |
2025-02-04 09:43:48.760460 | orchestrator | None
2025-02-04 09:43:48.760475 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
2025-02-04 09:43:48.760490 | orchestrator |
2025-02-04 09:43:48.760505 | orchestrator | PLAY [Create ceph pools] *******************************************************
2025-02-04 09:43:48.760546 | orchestrator |
2025-02-04 09:43:48.760561 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ********
2025-02-04 09:43:48.760582 | orchestrator | Tuesday 04 February 2025 09:41:27 +0000 (0:00:01.945) 0:00:01.945 ******
2025-02-04 09:43:48.760716 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-02-04 09:43:48.760752 | orchestrator |
2025-02-04 09:43:48.760777 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] *****************
2025-02-04 09:43:48.760801 | orchestrator | Tuesday 04 February 2025 09:41:28 +0000 (0:00:00.634) 0:00:02.579 ******
2025-02-04 09:43:48.760826 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-0)
2025-02-04 09:43:48.761574 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-1)
2025-02-04 09:43:48.761611 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-2)
2025-02-04 09:43:48.761635 | orchestrator |
2025-02-04 09:43:48.761657 | orchestrator | TASK [ceph-facts : include facts.yml] ******************************************
2025-02-04 09:43:48.761679 | orchestrator | Tuesday 04 February 2025 09:41:29 +0000 (0:00:01.063) 0:00:03.643 ******
2025-02-04 09:43:48.761702 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-02-04 09:43:48.761726 | orchestrator |
2025-02-04 09:43:48.761747 | orchestrator | TASK [ceph-facts : check if it is atomic host] *********************************
2025-02-04 09:43:48.761770 | orchestrator | Tuesday 04 February 2025 09:41:30 +0000 (0:00:00.936) 0:00:04.579 ******
2025-02-04 09:43:48.761793 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:43:48.761813 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:43:48.761836 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:43:48.761860 | orchestrator |
2025-02-04 09:43:48.761885 | orchestrator | TASK [ceph-facts : set_fact is_atomic] *****************************************
2025-02-04 09:43:48.761910 | orchestrator | Tuesday 04 February 2025 09:41:31 +0000 (0:00:00.747) 0:00:05.327 ******
2025-02-04 09:43:48.761936 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:43:48.761960 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:43:48.762010 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:43:48.762128 | orchestrator |
2025-02-04 09:43:48.762156 | orchestrator | TASK [ceph-facts : check if podman binary is present] **************************
2025-02-04 09:43:48.762184 | orchestrator | Tuesday 04 February 2025 09:41:31 +0000 (0:00:00.348) 0:00:05.675 ******
2025-02-04 09:43:48.762208 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:43:48.762264 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:43:48.762288 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:43:48.762314 | orchestrator |
2025-02-04 09:43:48.762341 | orchestrator | TASK [ceph-facts : set_fact container_binary] **********************************
2025-02-04 09:43:48.762369 | orchestrator | Tuesday 04 February 2025 09:41:32 +0000 (0:00:00.982) 0:00:06.658 ******
2025-02-04 09:43:48.762395 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:43:48.762419 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:43:48.762435 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:43:48.762456 | orchestrator |
2025-02-04 09:43:48.762481 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ******************************************
2025-02-04 09:43:48.762507 | orchestrator | Tuesday 04 February 2025 09:41:32 +0000 (0:00:00.340) 0:00:06.998 ******
2025-02-04 09:43:48.762533 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:43:48.762559 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:43:48.762584 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:43:48.762610 | orchestrator |
2025-02-04 09:43:48.762636 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] *********************
2025-02-04 09:43:48.762662 | orchestrator | Tuesday 04 February 2025 09:41:33 +0000 (0:00:00.372) 0:00:07.370 ******
2025-02-04 09:43:48.762687 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:43:48.762713 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:43:48.762739 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:43:48.762764 | orchestrator |
2025-02-04 09:43:48.762784 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] ***
2025-02-04 09:43:48.762799 | orchestrator | Tuesday 04 February 2025 09:41:34 +0000 (0:00:00.674) 0:00:08.044 ******
2025-02-04 09:43:48.762829 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:43:48.762845 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:43:48.762860 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:43:48.762874 | orchestrator |
2025-02-04 09:43:48.762888 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ******************
2025-02-04 09:43:48.762903 | orchestrator | Tuesday 04 February 2025 09:41:34 +0000 (0:00:00.365) 0:00:08.410 ******
2025-02-04 09:43:48.762917 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:43:48.762931 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:43:48.762945 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:43:48.762959 | orchestrator |
2025-02-04 09:43:48.763012 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************
2025-02-04 09:43:48.763028 | orchestrator | Tuesday 04 February 2025 09:41:34 +0000 (0:00:00.366) 0:00:08.777 ******
2025-02-04 09:43:48.763043 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-02-04 09:43:48.763057 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-02-04 09:43:48.763072 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-02-04 09:43:48.763086 | orchestrator |
2025-02-04 09:43:48.763100 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ********************************
2025-02-04 09:43:48.763129 | orchestrator | Tuesday 04 February 2025 09:41:35 +0000 (0:00:01.219) 0:00:09.996 ******
2025-02-04 09:43:48.763154 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:43:48.763169 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:43:48.763183 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:43:48.763197 | orchestrator |
2025-02-04 09:43:48.763211 | orchestrator | TASK [ceph-facts : find a running mon container] *******************************
2025-02-04 09:43:48.763226 | orchestrator | Tuesday 04 February 2025 09:41:36 +0000 (0:00:00.996) 0:00:10.992 ******
2025-02-04 09:43:48.763253 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-02-04 09:43:48.763268 | orchestrator | changed: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-02-04 09:43:48.763282 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
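The probe behind "find a running mon container" is visible later in its raw results: each mon host is asked for the ID of a container whose name matches ceph-mon-<hostname>. A minimal standalone sketch of the same check (illustrative only; the role runs this command through Ansible rather than directly, and the hostnames here are just the ones from this testbed):

    import subprocess

    def find_mon_container(mon_host):
        # Same probe the task runs: list container IDs whose name matches the mon.
        result = subprocess.run(
            ["docker", "ps", "-q", "--filter", f"name=ceph-mon-{mon_host}"],
            capture_output=True, text=True, check=False)
        return result.stdout.strip() or None

    for host in ("testbed-node-0", "testbed-node-1", "testbed-node-2"):
        print(host, find_mon_container(host))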
2025-02-04 09:43:48.763296 | orchestrator |
2025-02-04 09:43:48.763310 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ********************************
2025-02-04 09:43:48.763325 | orchestrator | Tuesday 04 February 2025 09:41:39 +0000 (0:00:02.800) 0:00:13.793 ******
2025-02-04 09:43:48.763339 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-02-04 09:43:48.763354 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-02-04 09:43:48.763368 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-02-04 09:43:48.763382 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:43:48.763397 | orchestrator |
2025-02-04 09:43:48.763411 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] *********************
2025-02-04 09:43:48.763425 | orchestrator | Tuesday 04 February 2025 09:41:40 +0000 (0:00:00.673) 0:00:14.467 ******
[three per-item skip records elided: each carried the full result dict of the previous task for items testbed-node-0/1/2, all with skip_reason "Conditional result was False" and false_condition "not containerized_deployment | bool"]
2025-02-04 09:43:48.763488 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:43:48.763510 | orchestrator |
2025-02-04 09:43:48.763524 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] ***********************
2025-02-04 09:43:48.763539 | orchestrator | Tuesday 04 February 2025 09:41:41 +0000 (0:00:00.758) 0:00:15.225 ******
[three doubly nested skip records elided, skipped under the same "not containerized_deployment | bool" condition]
2025-02-04 09:43:48.763606 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:43:48.763620 | orchestrator |
2025-02-04 09:43:48.763634 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] ***************************
2025-02-04 09:43:48.763649 | orchestrator | Tuesday 04 February 2025 09:41:41 +0000 (0:00:00.201) 0:00:15.426 ******
[full "docker ps -q --filter name=ceph-mon-<host>" result dicts elided; all three commands returned rc 0, and the matched mon container IDs were 8efb218d8299 on testbed-node-0, 7f40cdf735f3 on testbed-node-1 and 69fc3f27bad2 on testbed-node-2]
2025-02-04 09:43:48.763734 | orchestrator |
2025-02-04 09:43:48.763748 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] *******************************
2025-02-04 09:43:48.763763 | orchestrator | Tuesday 04 February 2025 09:41:41 +0000 (0:00:00.220) 0:00:15.647 ******
2025-02-04 09:43:48.763777 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:43:48.763791 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:43:48.763805 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:43:48.763819 | orchestrator |
2025-02-04 09:43:48.763834 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] *************
2025-02-04 09:43:48.763848 | orchestrator | Tuesday 04 February 2025 09:41:42 +0000 (0:00:00.594) 0:00:16.242 ******
2025-02-04 09:43:48.763862 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2025-02-04 09:43:48.763876 | orchestrator |
2025-02-04 09:43:48.763890 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] *********************************
2025-02-04 09:43:48.763904 | orchestrator | Tuesday 04 February 2025 09:41:44 +0000 (0:00:01.767) 0:00:18.010 ******
2025-02-04 09:43:48.763918 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:43:48.763932 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:43:48.763947 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:43:48.763961 | orchestrator |
2025-02-04 09:43:48.764000 | orchestrator | TASK [ceph-facts : get current fsid] *******************************************
2025-02-04 09:43:48.764015 | orchestrator | Tuesday 04 February 2025 09:41:44 +0000 (0:00:00.443) 0:00:18.453 ******
2025-02-04 09:43:48.764029 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:43:48.764044 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:43:48.764058 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:43:48.764072 | orchestrator |
2025-02-04 09:43:48.764086 | orchestrator | TASK [ceph-facts : set_fact fsid] **********************************************
2025-02-04 09:43:48.764100 | orchestrator | Tuesday 04 February 2025 09:41:45 +0000 (0:00:00.557) 0:00:19.010 ******
2025-02-04 09:43:48.764114 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:43:48.764128 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:43:48.764143 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:43:48.764157 | orchestrator |
2025-02-04 09:43:48.764171 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] ****************************
2025-02-04 09:43:48.764185 | orchestrator | Tuesday 04 February 2025 09:41:45 +0000 (0:00:00.396) 0:00:19.406 ******
2025-02-04 09:43:48.764199 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:43:48.764214 | orchestrator |
2025-02-04 09:43:48.764233 | orchestrator | TASK [ceph-facts : generate cluster fsid] **************************************
2025-02-04 09:43:48.764248 | orchestrator | Tuesday 04 February 2025 09:41:45 +0000 (0:00:00.156) 0:00:19.563 ******
2025-02-04 09:43:48.764262 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:43:48.764276 | orchestrator |
2025-02-04 09:43:48.764290 | orchestrator | TASK [ceph-facts : set_fact fsid] **********************************************
2025-02-04 09:43:48.764304 | orchestrator | Tuesday 04 February 2025 09:41:45 +0000 (0:00:00.353) 0:00:19.916 ******
2025-02-04 09:43:48.764318 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:43:48.764332 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:43:48.764347 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:43:48.764361 | orchestrator |
2025-02-04 09:43:48.764375 | orchestrator | TASK [ceph-facts : resolve device link(s)] *************************************
2025-02-04 09:43:48.764389 | orchestrator | Tuesday 04 February 2025 09:41:46 +0000 (0:00:00.629) 0:00:20.546 ******
2025-02-04 09:43:48.764403 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:43:48.764417 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:43:48.764432 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:43:48.764445 | orchestrator |
2025-02-04 09:43:48.764460 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] **************
2025-02-04 09:43:48.764474 | orchestrator | Tuesday 04 February 2025 09:41:46 +0000 (0:00:00.432) 0:00:20.978 ******
2025-02-04 09:43:48.764488 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:43:48.764502 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:43:48.764522 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:43:48.764537 | orchestrator |
2025-02-04 09:43:48.764551 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] ***************************
2025-02-04 09:43:48.764565 | orchestrator | Tuesday 04 February 2025 09:41:47 +0000 (0:00:00.473) 0:00:21.451 ******
2025-02-04 09:43:48.764579 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:43:48.764594 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:43:48.764615 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:43:48.764630 | orchestrator |
2025-02-04 09:43:48.764644 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] ****
2025-02-04 09:43:48.764659 | orchestrator | Tuesday 04 February 2025 09:41:47 +0000 (0:00:00.478) 0:00:21.930 ******
2025-02-04 09:43:48.764673 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:43:48.764693 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:43:48.764707 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:43:48.764721 | orchestrator |
2025-02-04 09:43:48.764736 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] ***********************
2025-02-04 09:43:48.764750 | orchestrator | Tuesday 04 February 2025 09:41:48 +0000 (0:00:00.693) 0:00:22.623 ******
2025-02-04 09:43:48.764765 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:43:48.764779 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:43:48.764793 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:43:48.764808 | orchestrator |
2025-02-04 09:43:48.764822 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-02-04 09:43:48.764836 | orchestrator | Tuesday 04 February 2025 09:41:49 +0000 (0:00:00.528) 0:00:23.152 ******
2025-02-04 09:43:48.764850 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:43:48.764865 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:43:48.764879 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:43:48.764893 | orchestrator |
2025-02-04 09:43:48.764908 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] ***
2025-02-04 09:43:48.764922 | orchestrator | Tuesday 04 February 2025 09:41:49 +0000 (0:00:00.431) 0:00:23.584 ******
[per-device skipped-item records elided: for each of testbed-node-3, testbed-node-4 and testbed-node-5 the loop walked the full ansible_facts device inventory (dm-0, dm-1, loop0 through loop7, the 80 GB root disk sda with partitions sda1/sda14/sda15/sda16, the 20 GB ceph-backed sdb and sdc, the empty 20 GB sdd, and the config-2 DVD sr0) and skipped every item, since osd_auto_discovery is not enabled]
2025-02-04 09:43:48.765305 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:43:48.765639 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:43:48.765810 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:43:48.765825 | orchestrator |
2025-02-04 09:43:48.765844 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************
2025-02-04 09:43:48.765869 | orchestrator | Tuesday 04 February 2025 09:41:50 +0000 (0:00:01.401) 0:00:24.985 ******
2025-02-04 09:43:48.765891 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2025-02-04 09:43:48.765906 | orchestrator |
2025-02-04 09:43:48.765920 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] *******************************
2025-02-04 09:43:48.765934 | orchestrator | Tuesday 04 February 2025 09:41:52 +0000 (0:00:01.489) 0:00:26.474 ******
2025-02-04 09:43:48.765948 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:43:48.765963 | orchestrator |
2025-02-04 09:43:48.766045 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] **************************************
2025-02-04 09:43:48.766063 | orchestrator | Tuesday 04 February 2025 09:41:52 +0000 (0:00:00.207) 0:00:26.681 ******
2025-02-04 09:43:48.766077 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:43:48.766092 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:43:48.766106 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:43:48.766120 | orchestrator |
2025-02-04 09:43:48.766135 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ******************************
2025-02-04 09:43:48.766150 | orchestrator | Tuesday 04 February 2025 09:41:53 +0000 (0:00:00.623) 0:00:27.305 ******
2025-02-04 09:43:48.766164 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:43:48.766178 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:43:48.766192 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:43:48.766206 | orchestrator |
2025-02-04 09:43:48.766220 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] ***************
2025-02-04 09:43:48.766235 | orchestrator | Tuesday 04 February 2025 09:41:54 +0000 (0:00:00.846) 0:00:28.152 ******
2025-02-04 09:43:48.766249 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:43:48.766263 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:43:48.766278 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:43:48.766292 | orchestrator |
2025-02-04 09:43:48.766306 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] ***************************
2025-02-04 09:43:48.766320 | orchestrator | Tuesday 04 February 2025 09:41:54 +0000 (0:00:00.634) 0:00:28.787 ******
2025-02-04 09:43:48.766334 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:43:48.766348 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:43:48.766362 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:43:48.766376 | orchestrator |
2025-02-04 09:43:48.766390 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] ***********************
2025-02-04 09:43:48.766404 | orchestrator | Tuesday 04 February 2025 09:41:56 +0000 (0:00:01.898) 0:00:30.685 ******
2025-02-04 09:43:48.766419 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:43:48.766433 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:43:48.766447 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:43:48.766461 | orchestrator |
2025-02-04 09:43:48.766482 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] ***************************
2025-02-04 09:43:48.766496 | orchestrator | Tuesday 04 February 2025 09:41:57 +0000 (0:00:00.417) 0:00:31.103 ******
2025-02-04 09:43:48.766510 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:43:48.766525 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:43:48.766539 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:43:48.766553 | orchestrator |
2025-02-04 09:43:48.766567 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] ***********************
2025-02-04 09:43:48.766582 | orchestrator | Tuesday 04 February 2025 09:41:57 +0000 (0:00:00.562) 0:00:31.665 ******
2025-02-04 09:43:48.766596 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:43:48.766610 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:43:48.766624 | orchestrator | skipping: [testbed-node-5]
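The read/set pairs above try to discover the cluster's default CRUSH rule, first from a running mon and then, if needed, from the local ceph.conf. One hypothetical way to read the same option by hand from one of the mon containers found earlier (illustrative only; "ceph config get" is just one way to inspect the option, and the role's exact command may differ):

    import subprocess

    def read_default_crush_rule(mon_container_id):
        # Ask the mon for the configured default CRUSH rule via the cluster
        # config database; returns the rule ID as a string.
        cmd = ["docker", "exec", mon_container_id,
               "ceph", "config", "get", "mon", "osd_pool_default_crush_rule"]
        return subprocess.run(cmd, capture_output=True, text=True).stdout.strip()

    print(read_default_crush_rule("8efb218d8299"))  # mon container on testbed-node-0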
2025-02-04 09:43:48.766638 | orchestrator |
2025-02-04 09:43:48.766653 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] ***
2025-02-04 09:43:48.766667 | orchestrator | Tuesday 04 February 2025 09:41:58 +0000 (0:00:00.748) 0:00:32.414 ******
2025-02-04 09:43:48.766681 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-02-04 09:43:48.766696 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-02-04 09:43:48.766710 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-02-04 09:43:48.766725 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-02-04 09:43:48.766739 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-02-04 09:43:48.766753 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:43:48.766767 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-02-04 09:43:48.766782 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-02-04 09:43:48.766796 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:43:48.766817 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-02-04 09:43:48.766832 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-02-04 09:43:48.766846 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:43:48.766860 | orchestrator |
2025-02-04 09:43:48.766875 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] ***
2025-02-04 09:43:48.766897 | orchestrator | Tuesday 04 February 2025 09:41:59 +0000 (0:00:01.419) 0:00:33.833 ******
2025-02-04 09:43:48.766912 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-02-04 09:43:48.766927 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-02-04 09:43:48.766941 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-02-04 09:43:48.766956 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-02-04 09:43:48.767035 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-02-04 09:43:48.767063 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:43:48.767082 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-02-04 09:43:48.767097 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-02-04 09:43:48.767111 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:43:48.767125 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-02-04 09:43:48.767139 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-02-04 09:43:48.767153 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:43:48.767167 | orchestrator |
2025-02-04 09:43:48.767182 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] *************
2025-02-04 09:43:48.767196 | orchestrator | Tuesday 04 February 2025 09:42:00 +0000 (0:00:01.071) 0:00:34.905 ******
2025-02-04 09:43:48.767210 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-02-04 09:43:48.767225 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-02-04 09:43:48.767239 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-02-04 09:43:48.767253 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-02-04 09:43:48.767268 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-02-04 09:43:48.767282 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-02-04 09:43:48.767296 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-02-04 09:43:48.767310 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-02-04 09:43:48.767324 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-02-04 09:43:48.767338 | orchestrator |
2025-02-04 09:43:48.767352 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] ****
2025-02-04 09:43:48.767366 | orchestrator | Tuesday 04 February 2025 09:42:03 +0000 (0:00:02.268) 0:00:37.174 ******
2025-02-04 09:43:48.767380 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-02-04 09:43:48.767395 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-02-04 09:43:48.767409 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-02-04 09:43:48.767423 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-02-04 09:43:48.767437 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:43:48.767452 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-02-04 09:43:48.767466 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-02-04 09:43:48.767480 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:43:48.767494 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-02-04 09:43:48.767508 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-02-04 09:43:48.767522 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-02-04 09:43:48.767536 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:43:48.767550 | orchestrator |
2025-02-04 09:43:48.767563 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] ****
2025-02-04 09:43:48.767576 | orchestrator | Tuesday 04 February 2025 09:42:03 +0000 (0:00:00.505) 0:00:37.680 ******
2025-02-04 09:43:48.767596 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-02-04 09:43:48.767609 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-02-04 09:43:48.767622 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-02-04 09:43:48.767640 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-02-04 09:43:48.767653 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-02-04 09:43:48.767666 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-02-04 09:43:48.767678 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:43:48.767691 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:43:48.767704 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-02-04 09:43:48.767716 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-02-04 09:43:48.767729 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-02-04 09:43:48.767742 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:43:48.767754 | orchestrator |
2025-02-04 09:43:48.767767 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] **************************
2025-02-04 09:43:48.767779 | orchestrator | Tuesday 04 February 2025 09:42:04 +0000 (0:00:00.654) 0:00:38.334 ******
2025-02-04 09:43:48.767792 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-02-04 09:43:48.767805 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
2025-02-04 09:43:48.767818 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
2025-02-04 09:43:48.767831 | orchestrator | skipping: [testbed-node-3]
2025-02-04 09:43:48.767844 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-02-04 09:43:48.767856 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
2025-02-04 09:43:48.767869 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-02-04 09:43:48.767882 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
2025-02-04 09:43:48.767901 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
2025-02-04 09:43:48.767914 | orchestrator | skipping: [testbed-node-5]
2025-02-04 09:43:48.767927 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
2025-02-04 09:43:48.767939 | orchestrator | skipping: [testbed-node-4]
2025-02-04 09:43:48.767844 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-02-04 09:43:48.767856 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-02-04 09:43:48.767869 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-02-04 09:43:48.767882 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-02-04 09:43:48.767901 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-02-04 09:43:48.767914 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:43:48.767927 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-02-04 09:43:48.767939 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:43:48.767952 | orchestrator | 2025-02-04 09:43:48.767965 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-02-04 09:43:48.768000 | orchestrator | Tuesday 04 February 2025 09:42:04 +0000 (0:00:00.449) 0:00:38.783 ****** 2025-02-04 09:43:48.768013 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:43:48.768026 | orchestrator | 2025-02-04 09:43:48.768039 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-02-04 09:43:48.768051 | orchestrator | Tuesday 04 February 2025 09:42:05 +0000 (0:00:00.940) 0:00:39.724 ****** 2025-02-04 09:43:48.768064 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:43:48.768077 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:43:48.768089 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:43:48.768108 | orchestrator | 2025-02-04 09:43:48.768120 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-02-04 09:43:48.768133 | orchestrator | Tuesday 04 February 2025 09:42:06 +0000 (0:00:00.429) 0:00:40.153 ****** 2025-02-04 09:43:48.768146 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:43:48.768159 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:43:48.768172 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:43:48.768184 | orchestrator | 2025-02-04 09:43:48.768197 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-02-04 09:43:48.768216 | orchestrator | Tuesday 04 February 2025 09:42:06 +0000 (0:00:00.420) 0:00:40.574 ****** 2025-02-04 09:43:48.768229 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:43:48.768241 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:43:48.768254 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:43:48.768273 | orchestrator | 2025-02-04 09:43:48.768286 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-02-04 09:43:48.768299 | orchestrator | Tuesday 04 February 2025 09:42:07 +0000 (0:00:00.857) 0:00:41.432 ****** 2025-02-04 09:43:48.768312 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:43:48.768324 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:43:48.768337 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:43:48.768349 | orchestrator | 2025-02-04 09:43:48.768362 | orchestrator | TASK [ceph-facts : set_fact _interface] 
**************************************** 2025-02-04 09:43:48.768375 | orchestrator | Tuesday 04 February 2025 09:42:08 +0000 (0:00:00.978) 0:00:42.410 ****** 2025-02-04 09:43:48.768387 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-04 09:43:48.768400 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-04 09:43:48.768413 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-04 09:43:48.768426 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:43:48.768438 | orchestrator | 2025-02-04 09:43:48.768451 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-02-04 09:43:48.768464 | orchestrator | Tuesday 04 February 2025 09:42:09 +0000 (0:00:00.603) 0:00:43.014 ****** 2025-02-04 09:43:48.768476 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-04 09:43:48.768489 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-04 09:43:48.768502 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-04 09:43:48.768515 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:43:48.768527 | orchestrator | 2025-02-04 09:43:48.768540 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-02-04 09:43:48.768553 | orchestrator | Tuesday 04 February 2025 09:42:09 +0000 (0:00:00.476) 0:00:43.491 ****** 2025-02-04 09:43:48.768565 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-04 09:43:48.768578 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-04 09:43:48.768591 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-04 09:43:48.768604 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:43:48.768616 | orchestrator | 2025-02-04 09:43:48.768629 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-04 09:43:48.768686 | orchestrator | Tuesday 04 February 2025 09:42:09 +0000 (0:00:00.489) 0:00:43.980 ****** 2025-02-04 09:43:48.768701 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:43:48.768714 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:43:48.768726 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:43:48.768739 | orchestrator | 2025-02-04 09:43:48.768752 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-02-04 09:43:48.768764 | orchestrator | Tuesday 04 February 2025 09:42:10 +0000 (0:00:00.518) 0:00:44.499 ****** 2025-02-04 09:43:48.768777 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-02-04 09:43:48.768790 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-02-04 09:43:48.768802 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-02-04 09:43:48.768815 | orchestrator | 2025-02-04 09:43:48.768827 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-02-04 09:43:48.768840 | orchestrator | Tuesday 04 February 2025 09:42:12 +0000 (0:00:01.518) 0:00:46.018 ****** 2025-02-04 09:43:48.768852 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:43:48.768865 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:43:48.768877 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:43:48.768890 | orchestrator | 2025-02-04 09:43:48.768903 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-02-04 09:43:48.768927 | orchestrator | Tuesday 04 February 2025 
09:42:12 +0000 (0:00:00.492) 0:00:46.510 ****** 2025-02-04 09:43:48.768939 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:43:48.768952 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:43:48.768965 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:43:48.769031 | orchestrator | 2025-02-04 09:43:48.769045 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-02-04 09:43:48.769066 | orchestrator | Tuesday 04 February 2025 09:42:13 +0000 (0:00:00.525) 0:00:47.035 ****** 2025-02-04 09:43:48.769080 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-02-04 09:43:48.769093 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:43:48.769107 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-02-04 09:43:48.769121 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:43:48.769134 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-02-04 09:43:48.769147 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:43:48.769161 | orchestrator | 2025-02-04 09:43:48.769174 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-02-04 09:43:48.769187 | orchestrator | Tuesday 04 February 2025 09:42:14 +0000 (0:00:01.356) 0:00:48.392 ****** 2025-02-04 09:43:48.769201 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-02-04 09:43:48.769214 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:43:48.769227 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-02-04 09:43:48.769240 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:43:48.769254 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-02-04 09:43:48.769267 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:43:48.769280 | orchestrator | 2025-02-04 09:43:48.769293 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-02-04 09:43:48.769307 | orchestrator | Tuesday 04 February 2025 09:42:14 +0000 (0:00:00.560) 0:00:48.952 ****** 2025-02-04 09:43:48.769320 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-02-04 09:43:48.769333 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-02-04 09:43:48.769347 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-02-04 09:43:48.769360 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-02-04 09:43:48.769373 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:43:48.769387 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-02-04 09:43:48.769400 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-02-04 09:43:48.769414 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-02-04 09:43:48.769427 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-02-04 09:43:48.769440 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:43:48.769454 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-02-04 09:43:48.769467 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:43:48.769480 | orchestrator | 2025-02-04 09:43:48.769493 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or 
old ceph-iscsi-config/cli] *** 2025-02-04 09:43:48.769507 | orchestrator | Tuesday 04 February 2025 09:42:15 +0000 (0:00:01.016) 0:00:49.969 ****** 2025-02-04 09:43:48.769520 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:43:48.769533 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:43:48.769544 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:43:48.769555 | orchestrator | 2025-02-04 09:43:48.769565 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-02-04 09:43:48.769576 | orchestrator | Tuesday 04 February 2025 09:42:16 +0000 (0:00:00.430) 0:00:50.400 ****** 2025-02-04 09:43:48.769587 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-02-04 09:43:48.769604 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-04 09:43:48.769615 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-04 09:43:48.769625 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-02-04 09:43:48.769636 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-02-04 09:43:48.769647 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-02-04 09:43:48.769658 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-02-04 09:43:48.769668 | orchestrator | 2025-02-04 09:43:48.769679 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-02-04 09:43:48.769690 | orchestrator | Tuesday 04 February 2025 09:42:17 +0000 (0:00:01.397) 0:00:51.797 ****** 2025-02-04 09:43:48.769701 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-02-04 09:43:48.769711 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-04 09:43:48.769722 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-04 09:43:48.769733 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-02-04 09:43:48.769744 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-02-04 09:43:48.769755 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-02-04 09:43:48.769765 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-02-04 09:43:48.769776 | orchestrator | 2025-02-04 09:43:48.769791 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-02-04 09:43:48.769802 | orchestrator | Tuesday 04 February 2025 09:42:19 +0000 (0:00:01.791) 0:00:53.589 ****** 2025-02-04 09:43:48.769813 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:43:48.769824 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:43:48.769834 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-02-04 09:43:48.769845 | orchestrator | 2025-02-04 09:43:48.769856 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-02-04 09:43:48.769871 | orchestrator | Tuesday 04 February 2025 09:42:20 +0000 (0:00:00.560) 0:00:54.149 ****** 2025-02-04 09:43:48.769884 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 
'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-02-04 09:43:48.769897 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-02-04 09:43:48.769908 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-02-04 09:43:48.769919 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-02-04 09:43:48.769930 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-02-04 09:43:48.769946 | orchestrator | 2025-02-04 09:43:48.769957 | orchestrator | TASK [generate keys] *********************************************************** 2025-02-04 09:43:48.769985 | orchestrator | Tuesday 04 February 2025 09:42:57 +0000 (0:00:37.508) 0:01:31.658 ****** 2025-02-04 09:43:48.769996 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-04 09:43:48.770007 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-04 09:43:48.770037 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-04 09:43:48.770048 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-04 09:43:48.770058 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-04 09:43:48.770069 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-04 09:43:48.770079 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-02-04 09:43:48.770089 | orchestrator | 2025-02-04 09:43:48.770099 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-02-04 09:43:48.770110 | orchestrator | Tuesday 04 February 2025 09:43:16 +0000 (0:00:18.764) 0:01:50.422 ****** 2025-02-04 09:43:48.770120 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-04 09:43:48.770130 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-04 09:43:48.770140 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-04 09:43:48.770151 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-04 09:43:48.770161 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-04 09:43:48.770171 | orchestrator | ok: [testbed-node-5 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-02-04 09:43:48.770182 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-02-04 09:43:48.770192 | orchestrator | 2025-02-04 09:43:48.770202 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-02-04 09:43:48.770212 | orchestrator | Tuesday 04 February 2025 09:43:26 +0000 (0:00:09.876) 0:02:00.299 ****** 2025-02-04 09:43:48.770223 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-04 09:43:48.770233 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-02-04 09:43:48.770243 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-02-04 09:43:48.770253 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-04 09:43:48.770264 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-02-04 09:43:48.770274 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-02-04 09:43:48.770284 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-04 09:43:48.770295 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-02-04 09:43:48.770305 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-02-04 09:43:48.770315 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-04 09:43:48.770330 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-02-04 09:43:48.770346 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-02-04 09:43:51.805332 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-04 09:43:51.805453 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-02-04 09:43:51.805472 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-02-04 09:43:51.805487 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-02-04 09:43:51.805528 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-02-04 09:43:51.805543 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-02-04 09:43:51.805557 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-02-04 09:43:51.805572 | orchestrator | 2025-02-04 09:43:51.805587 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:43:51.805603 | orchestrator | testbed-node-3 : ok=30  changed=2  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-02-04 09:43:51.805619 | orchestrator | testbed-node-4 : ok=20  changed=0 unreachable=0 failed=0 skipped=30  rescued=0 ignored=0 2025-02-04 09:43:51.805633 | orchestrator | testbed-node-5 : ok=25  changed=3  unreachable=0 failed=0 skipped=29  rescued=0 ignored=0 2025-02-04 09:43:51.805647 | orchestrator | 2025-02-04 09:43:51.805661 | orchestrator | 2025-02-04 09:43:51.805675 | orchestrator | 2025-02-04 09:43:51.805689 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:43:51.805703 | orchestrator | Tuesday 04 February 2025 09:43:45 
+0000 (0:00:19.114) 0:02:19.413 ******
2025-02-04 09:43:51.805717 | orchestrator | ===============================================================================
2025-02-04 09:43:51.805731 | orchestrator | create openstack pool(s) ----------------------------------------------- 37.51s
2025-02-04 09:43:51.805745 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 19.11s
2025-02-04 09:43:51.805760 | orchestrator | generate keys ---------------------------------------------------------- 18.76s
2025-02-04 09:43:51.805774 | orchestrator | get keys from monitors -------------------------------------------------- 9.88s
2025-02-04 09:43:51.805788 | orchestrator | ceph-facts : find a running mon container ------------------------------- 2.80s
2025-02-04 09:43:51.805802 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 2.27s
2025-02-04 09:43:51.805816 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 1.90s
2025-02-04 09:43:51.805834 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.79s
2025-02-04 09:43:51.805857 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.77s
2025-02-04 09:43:51.805881 | orchestrator | ceph-facts : set_fact rgw_instances without rgw multisite --------------- 1.52s
2025-02-04 09:43:51.805904 | orchestrator | ceph-facts : get ceph current status ------------------------------------ 1.49s
2025-02-04 09:43:51.806135 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 1.42s
2025-02-04 09:43:51.806150 | orchestrator | ceph-facts : set_fact devices generate device list when osd_auto_discovery --- 1.40s
2025-02-04 09:43:51.806165 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 1.40s
2025-02-04 09:43:51.806179 | orchestrator | ceph-facts : set_fact rgw_instances with rgw multisite ------------------ 1.36s
2025-02-04 09:43:51.806193 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 1.22s
2025-02-04 09:43:51.806207 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 1.07s
2025-02-04 09:43:51.806221 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 1.06s
2025-02-04 09:43:51.806235 | orchestrator | ceph-facts : set_fact rgw_instances_all --------------------------------- 1.02s
2025-02-04 09:43:51.806249 | orchestrator | ceph-facts : set_fact container_exec_cmd -------------------------------- 1.00s
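The recap above is dominated by the "create openstack pool(s)" task (37.51s), whose loop items earlier in the log define five replicated RBD pools (backups, volumes, images, metrics, vms) with pg_num/pgp_num 32, size 3, the replicated_rule CRUSH rule, and the autoscaler disabled. As a rough illustration only, assuming a host with the ceph CLI and admin keyring (the play itself drives this through ceph-ansible's own module, not these commands), the equivalent CLI sequence could be scripted like this:

```python
import subprocess

# Pool definitions copied from the "create openstack pool(s)" loop items above:
# five replicated RBD pools, pg_num/pgp_num 32, size 3, autoscaler off.
POOLS = ["backups", "volumes", "images", "metrics", "vms"]

def create_pool(name: str, pg_num: int = 32,
                rule: str = "replicated_rule", size: int = 3) -> None:
    """Create and tag one pool via the ceph CLI (illustrative equivalent only)."""
    def ceph_osd_pool(*args: str) -> None:
        subprocess.run(["ceph", "osd", "pool", *args], check=True)

    ceph_osd_pool("create", name, str(pg_num), str(pg_num), "replicated", rule)
    ceph_osd_pool("set", name, "pg_autoscale_mode", "off")  # pg_autoscale_mode: False
    ceph_osd_pool("set", name, "size", str(size))           # size: 3
    ceph_osd_pool("application", "enable", name, "rbd")     # application: rbd

for pool in POOLS:
    create_pool(pool)
```

The min_size of 0 in the loop items appears to defer to the cluster-wide default rather than pin a value, which is why no equivalent command is sketched for it here.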
2025-02-04 09:43:51.806264 | orchestrator | 2025-02-04 09:43:48 | INFO  | Task 71254739-7aed-4310-944a-43301f546576 is in state STARTED
2025-02-04 09:43:51.806279 | orchestrator | 2025-02-04 09:43:48 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED
2025-02-04 09:43:51.806309 | orchestrator | 2025-02-04 09:43:48 | INFO  | Task 36188578-51d5-4b4f-9ce4-918615765364 is in state STARTED
2025-02-04 09:43:51.806336 | orchestrator | 2025-02-04 09:43:48 | INFO  | Wait 1 second(s) until the next check
2025-02-04 09:43:51.806374 | orchestrator | 2025-02-04 09:43:51 | INFO  | Task e5ff8f94-29eb-4bb7-8960-82e778b51d71 is in state STARTED
2025-02-04 09:43:51.806793 | orchestrator | 2025-02-04 09:43:51 | INFO  | Task e16d8d1c-5ed7-438a-8f5d-78419227f281 is in state STARTED
2025-02-04 09:43:51.806833 | orchestrator | 2025-02-04 09:43:51 | INFO  | Task 71254739-7aed-4310-944a-43301f546576 is in state STARTED
2025-02-04 09:43:51.806860 | orchestrator | 2025-02-04 09:43:51 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED
2025-02-04 09:43:51.807925 | orchestrator | 2025-02-04 09:43:51 | INFO  | Task 36188578-51d5-4b4f-9ce4-918615765364 is in state STARTED
2025-02-04 09:43:54.848897 | orchestrator | 2025-02-04 09:43:51 | INFO  | Wait 1 second(s) until the next check
2025-02-04 09:43:54.849224 | orchestrator | 2025-02-04 09:43:54 | INFO  | Task e5ff8f94-29eb-4bb7-8960-82e778b51d71 is in state STARTED
2025-02-04 09:43:54.851004 | orchestrator | 2025-02-04 09:43:54 | INFO  | Task e16d8d1c-5ed7-438a-8f5d-78419227f281 is in state STARTED
2025-02-04 09:43:54.851100 | orchestrator | 2025-02-04 09:43:54 | INFO  | Task 71254739-7aed-4310-944a-43301f546576 is in state STARTED
2025-02-04 09:43:54.852523 | orchestrator | 2025-02-04 09:43:54 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED
2025-02-04 09:43:54.852632 | orchestrator | 2025-02-04 09:43:54 | INFO  | Task 36188578-51d5-4b4f-9ce4-918615765364 is in state STARTED
2025-02-04 09:43:57.895745 | orchestrator | 2025-02-04 09:43:54 | INFO  | Wait 1 second(s) until the next check
2025-02-04 09:43:57.895884 | orchestrator | 2025-02-04 09:43:57 | INFO  | Task e5ff8f94-29eb-4bb7-8960-82e778b51d71 is in state STARTED
2025-02-04 09:43:57.897087 | orchestrator | 2025-02-04 09:43:57 | INFO  | Task e16d8d1c-5ed7-438a-8f5d-78419227f281 is in state STARTED
2025-02-04 09:43:57.899189 | orchestrator | 2025-02-04 09:43:57 | INFO  | Task 71254739-7aed-4310-944a-43301f546576 is in state STARTED
2025-02-04 09:43:57.900435 | orchestrator | 2025-02-04 09:43:57 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED
2025-02-04 09:43:57.901927 | orchestrator | 2025-02-04 09:43:57 | INFO  | Task 36188578-51d5-4b4f-9ce4-918615765364 is in state STARTED
2025-02-04 09:43:57.903409 | orchestrator | 2025-02-04 09:43:57 | INFO  | Task 1e4e6b48-8faa-40b8-8397-039e56dc3487 is in state STARTED
2025-02-04 09:44:00.945047 | orchestrator | 2025-02-04 09:43:57 | INFO  | Wait 1 second(s) until the next check
2025-02-04 09:44:00.945172 | orchestrator | 2025-02-04 09:44:00 | INFO  | Task e5ff8f94-29eb-4bb7-8960-82e778b51d71 is in state STARTED
2025-02-04 09:44:00.947121 | orchestrator | 2025-02-04 09:44:00 | INFO  | Task e16d8d1c-5ed7-438a-8f5d-78419227f281 is in state STARTED
2025-02-04 09:44:00.948402 | orchestrator | 2025-02-04 09:44:00 | INFO  | Task 71254739-7aed-4310-944a-43301f546576 is in state STARTED
2025-02-04 09:44:00.949927 | orchestrator | 2025-02-04 09:44:00 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED
2025-02-04 09:44:00.950601 | orchestrator | 2025-02-04 09:44:00 | INFO  | Task 36188578-51d5-4b4f-9ce4-918615765364 is in state STARTED
2025-02-04 09:44:00.951644 | orchestrator | 2025-02-04 09:44:00 | INFO  | Task 1e4e6b48-8faa-40b8-8397-039e56dc3487 is in state STARTED
2025-02-04 09:44:00.952015 | orchestrator | 2025-02-04 09:44:00 | INFO  | Wait 1 second(s) until the next check
2025-02-04 09:44:03.989871 | orchestrator | 2025-02-04 09:44:03 | INFO  | Task e5ff8f94-29eb-4bb7-8960-82e778b51d71 is in state STARTED
2025-02-04 09:44:03.991561 | orchestrator | 2025-02-04 09:44:03 | INFO  | Task e16d8d1c-5ed7-438a-8f5d-78419227f281 is in state STARTED
2025-02-04 09:44:03.993777 | orchestrator | 2025-02-04 09:44:03 | INFO  | Task 71254739-7aed-4310-944a-43301f546576 is in state STARTED
2025-02-04 09:44:03.995861 | orchestrator | 2025-02-04
09:44:03 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:44:03.997627 | orchestrator | 2025-02-04 09:44:03 | INFO  | Task 36188578-51d5-4b4f-9ce4-918615765364 is in state STARTED 2025-02-04 09:44:03.999000 | orchestrator | 2025-02-04 09:44:03 | INFO  | Task 1e4e6b48-8faa-40b8-8397-039e56dc3487 is in state STARTED 2025-02-04 09:44:03.999278 | orchestrator | 2025-02-04 09:44:03 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:44:07.062374 | orchestrator | 2025-02-04 09:44:07.062500 | orchestrator | 2025-02-04 09:44:07.062522 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-04 09:44:07.062538 | orchestrator | 2025-02-04 09:44:07.062553 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-04 09:44:07.062568 | orchestrator | Tuesday 04 February 2025 09:42:44 +0000 (0:00:00.667) 0:00:00.667 ****** 2025-02-04 09:44:07.062583 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:44:07.062598 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:44:07.062612 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:44:07.062627 | orchestrator | 2025-02-04 09:44:07.062642 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-04 09:44:07.062657 | orchestrator | Tuesday 04 February 2025 09:42:44 +0000 (0:00:00.570) 0:00:01.238 ****** 2025-02-04 09:44:07.062671 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-02-04 09:44:07.062685 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-02-04 09:44:07.062700 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-02-04 09:44:07.062714 | orchestrator | 2025-02-04 09:44:07.062728 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-02-04 09:44:07.062742 | orchestrator | 2025-02-04 09:44:07.062757 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-02-04 09:44:07.062771 | orchestrator | Tuesday 04 February 2025 09:42:45 +0000 (0:00:00.942) 0:00:02.181 ****** 2025-02-04 09:44:07.062786 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:44:07.062801 | orchestrator | 2025-02-04 09:44:07.062815 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-02-04 09:44:07.062829 | orchestrator | Tuesday 04 February 2025 09:42:46 +0000 (0:00:00.772) 0:00:02.953 ****** 2025-02-04 09:44:07.062843 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating services (5 retries left). 2025-02-04 09:44:07.062858 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating services (4 retries left). 2025-02-04 09:44:07.062872 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating services (3 retries left). 2025-02-04 09:44:07.062886 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating services (2 retries left). 2025-02-04 09:44:07.062900 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating services (1 retries left). 
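All five retries above fail identically, and the failure record that follows shows why: keystoneauth cannot even open a TCP connection to the internal identity endpoint (errno 113, "No route to host"), which points at the API VIP or routing rather than at TLS or Keystone itself. A minimal probe at the same layer, reusing the endpoint host and port from the log (a diagnostic sketch, not part of the job):

```python
import socket

# Host and port taken from the failure record: the internal Keystone endpoint.
HOST, PORT = "api-int.testbed.osism.xyz", 5000

def reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Plain TCP probe. If this raises OSError with errno 113 (EHOSTUNREACH),
    the 'No route to host' from the traceback is reproduced below the TLS and
    Keystone layers, implicating the VIP or routing rather than the service."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError as exc:
        print(f"{host}:{port} unreachable: {exc}")
        return False

if __name__ == "__main__":
    reachable(HOST, PORT)
```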
2025-02-04 09:44:07.063050 | orchestrator | failed: [testbed-node-0] (item=barbican (key-manager)) => {"action": "os_keystone_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Barbican Key Management Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9311"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9311"}], "name": "barbican", "type": "key-manager"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connection.py\", line 174, in _new_conn\n conn = connection.create_connection(\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/util/connection.py\", line 95, in create_connection\n raise err\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/util/connection.py\", line 85, in create_connection\n sock.connect(sa)\nOSError: [Errno 113] No route to host\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connectionpool.py\", line 715, in urlopen\n httplib_response = self._make_request(\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connectionpool.py\", line 404, in _make_request\n self._validate_conn(conn)\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connectionpool.py\", line 1058, in _validate_conn\n conn.connect()\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connection.py\", line 363, in connect\n self.sock = conn = self._new_conn()\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connection.py\", line 186, in _new_conn\n raise NewConnectionError(\nurllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 113] No route to host\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/requests/adapters.py\", line 486, in send\n resp = conn.urlopen(\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connectionpool.py\", line 799, in urlopen\n retries = retries.increment(\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/util/retry.py\", line 592, in increment\n raise MaxRetryError(_pool, url, error or ResponseError(cause))\nurllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='api-int.testbed.osism.xyz', port=5000): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 113] No route to host'))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1021, in _send_request\n resp = self.session.request(method, url, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/requests/sessions.py\", line 589, in request\n resp = self.send(prep, **send_kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/requests/sessions.py\", line 703, in send\n r = adapter.send(request, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/requests/adapters.py\", line 519, in send\n raise ConnectionError(e, request=request)\nrequests.exceptions.ConnectionError: HTTPSConnectionPool(host='api-int.testbed.osism.xyz', port=5000): Max retries exceeded with url: / (Caused by 
NewConnectionError(': Failed to establish a new connection: [Errno 113] No route to host'))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 133, in _do_create_plugin\n disc = self.get_discovery(session,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 605, in get_discovery\n return discover.get_discovery(session=session, url=url,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 1459, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 539, in __init__\n self._data = get_version_data(session, url,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 106, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1154, in get\n return self.request(url, 'GET', **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 930, in request\n resp = send(**kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1037, in _send_request\n raise exceptions.ConnectFailure(msg)\nkeystoneauth1.exceptions.connection.ConnectFailure: Unable to establish connection to https://api-int.testbed.osism.xyz:5000: HTTPSConnectionPool(host='api-int.testbed.osism.xyz', port=5000): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 113] No route to host'))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1738662241.5898657-4107-14295582693816/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1738662241.5898657-4107-14295582693816/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1738662241.5898657-4107-14295582693816/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"/usr/lib/python3.10/runpy.py\", line 224, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib/python3.10/runpy.py\", line 96, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_os_keystone_service_payload_94702mk3/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_os_keystone_service_payload_94702mk3/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_os_keystone_service_payload_94702mk3/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File 
\"/tmp/ansible_os_keystone_service_payload_94702mk3/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_os_keystone_service_payload_94702mk3/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.10/site-packages/openstack/service_description.py\", line 89, in __get__\n proxy = self._make_proxy(instance)\n File \"/opt/ansible/lib/python3.10/site-packages/openstack/service_description.py\", line 287, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 268, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 131, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 203, in get_auth_ref\n self._plugin = self._do_create_plugin(session)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 155, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. 
Unable to establish connection to https://api-int.testbed.osism.xyz:5000: HTTPSConnectionPool(host='api-int.testbed.osism.xyz', port=5000): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 113] No route to host'))\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} 2025-02-04 09:44:07.063114 | orchestrator | 2025-02-04 09:44:07.063130 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:44:07.063145 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-02-04 09:44:07.063161 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-04 09:44:07.063176 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-04 09:44:07.063191 | orchestrator | 2025-02-04 09:44:07.063205 | orchestrator | 2025-02-04 09:44:07.063219 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:44:07.063234 | orchestrator | Tuesday 04 February 2025 09:44:05 +0000 (0:01:18.534) 0:01:21.488 ****** 2025-02-04 09:44:07.063256 | orchestrator | =============================================================================== 2025-02-04 09:44:07.065805 | orchestrator | service-ks-register : barbican | Creating services --------------------- 78.53s 2025-02-04 09:44:07.065837 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.94s 2025-02-04 09:44:07.065852 | orchestrator | barbican : include_tasks ------------------------------------------------ 0.77s 2025-02-04 09:44:07.065866 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.57s 2025-02-04 09:44:07.065887 | orchestrator | 2025-02-04 09:44:07 | INFO  | Task e5ff8f94-29eb-4bb7-8960-82e778b51d71 is in state SUCCESS 2025-02-04 09:44:07.066329 | orchestrator | 2025-02-04 09:44:07 | INFO  | Task e16d8d1c-5ed7-438a-8f5d-78419227f281 is in state STARTED 2025-02-04 09:44:07.066360 | orchestrator | 2025-02-04 09:44:07 | INFO  | Task b1466803-00f6-4928-8205-4c239efc3f91 is in state STARTED 2025-02-04 09:44:07.066376 | orchestrator | 2025-02-04 09:44:07 | INFO  | Task 8e3f308d-a7ec-463b-8835-df7ef51fd4a3 is in state STARTED 2025-02-04 09:44:07.066398 | orchestrator | 2025-02-04 09:44:07.066414 | orchestrator | 2025-02-04 09:44:07.066430 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-04 09:44:07.066445 | orchestrator | 2025-02-04 09:44:07.066461 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-04 09:44:07.066476 | orchestrator | Tuesday 04 February 2025 09:42:43 +0000 (0:00:00.594) 0:00:00.594 ****** 2025-02-04 09:44:07.066491 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:44:07.066508 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:44:07.066523 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:44:07.066538 | orchestrator | 2025-02-04 09:44:07.066554 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-04 09:44:07.066569 | orchestrator | Tuesday 04 February 2025 09:42:43 +0000 (0:00:00.668) 0:00:01.262 ****** 2025-02-04 09:44:07.066585 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-02-04 09:44:07.066600 | orchestrator | ok: 
[testbed-node-1] => (item=enable_designate_True) 2025-02-04 09:44:07.066629 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-02-04 09:44:07.066644 | orchestrator | 2025-02-04 09:44:07.066660 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-02-04 09:44:07.066675 | orchestrator | 2025-02-04 09:44:07.066690 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-02-04 09:44:07.066705 | orchestrator | Tuesday 04 February 2025 09:42:44 +0000 (0:00:00.601) 0:00:01.864 ****** 2025-02-04 09:44:07.066720 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:44:07.066736 | orchestrator | 2025-02-04 09:44:07.066751 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-02-04 09:44:07.066775 | orchestrator | Tuesday 04 February 2025 09:42:45 +0000 (0:00:01.073) 0:00:02.938 ****** 2025-02-04 09:44:07.066791 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating services (5 retries left). 2025-02-04 09:44:07.066807 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating services (4 retries left). 2025-02-04 09:44:07.066822 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating services (3 retries left). 2025-02-04 09:44:07.066837 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating services (2 retries left). 2025-02-04 09:44:07.066852 | orchestrator | FAILED - RETRYING: [testbed-node-0]: designate | Creating services (1 retries left). 2025-02-04 09:44:07.066920 | orchestrator | failed: [testbed-node-0] (item=designate (dns)) => {"action": "os_keystone_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Designate DNS Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9001"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9001"}], "name": "designate", "type": "dns"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. 
Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connection.py\", line 174, in _new_conn\n conn = connection.create_connection(\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/util/connection.py\", line 95, in create_connection\n raise err\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/util/connection.py\", line 85, in create_connection\n sock.connect(sa)\nOSError: [Errno 113] No route to host\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connectionpool.py\", line 715, in urlopen\n httplib_response = self._make_request(\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connectionpool.py\", line 404, in _make_request\n self._validate_conn(conn)\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connectionpool.py\", line 1058, in _validate_conn\n conn.connect()\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connection.py\", line 363, in connect\n self.sock = conn = self._new_conn()\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connection.py\", line 186, in _new_conn\n raise NewConnectionError(\nurllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 113] No route to host\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/requests/adapters.py\", line 486, in send\n resp = conn.urlopen(\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connectionpool.py\", line 799, in urlopen\n retries = retries.increment(\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/util/retry.py\", line 592, in increment\n raise MaxRetryError(_pool, url, error or ResponseError(cause))\nurllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='api-int.testbed.osism.xyz', port=5000): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 113] No route to host'))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1021, in _send_request\n resp = self.session.request(method, url, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/requests/sessions.py\", line 589, in request\n resp = self.send(prep, **send_kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/requests/sessions.py\", line 703, in send\n r = adapter.send(request, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/requests/adapters.py\", line 519, in send\n raise ConnectionError(e, request=request)\nrequests.exceptions.ConnectionError: HTTPSConnectionPool(host='api-int.testbed.osism.xyz', port=5000): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 113] No route to host'))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 133, in _do_create_plugin\n disc = self.get_discovery(session,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 605, in get_discovery\n return discover.get_discovery(session=session, url=url,\n File 
\"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 1459, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 539, in __init__\n self._data = get_version_data(session, url,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 106, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1154, in get\n return self.request(url, 'GET', **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 930, in request\n resp = send(**kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1037, in _send_request\n raise exceptions.ConnectFailure(msg)\nkeystoneauth1.exceptions.connection.ConnectFailure: Unable to establish connection to https://api-int.testbed.osism.xyz:5000: HTTPSConnectionPool(host='api-int.testbed.osism.xyz', port=5000): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 113] No route to host'))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1738662241.4185138-4096-115003963791451/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1738662241.4185138-4096-115003963791451/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1738662241.4185138-4096-115003963791451/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"/usr/lib/python3.10/runpy.py\", line 224, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib/python3.10/runpy.py\", line 96, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_os_keystone_service_payload_iteov2kl/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_os_keystone_service_payload_iteov2kl/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_os_keystone_service_payload_iteov2kl/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_os_keystone_service_payload_iteov2kl/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_os_keystone_service_payload_iteov2kl/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.10/site-packages/openstack/service_description.py\", line 89, in __get__\n proxy = self._make_proxy(instance)\n File \"/opt/ansible/lib/python3.10/site-packages/openstack/service_description.py\", line 287, in _make_proxy\n 
found_version = temp_adapter.get_api_major_version()\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 268, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 131, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 203, in get_auth_ref\n self._plugin = self._do_create_plugin(session)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 155, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. Unable to establish connection to https://api-int.testbed.osism.xyz:5000: HTTPSConnectionPool(host='api-int.testbed.osism.xyz', port=5000): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 113] No route to host'))\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} 2025-02-04 09:44:07.066983 | orchestrator | 2025-02-04 09:44:07.067010 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:44:07.067037 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-02-04 09:44:07.067062 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-04 09:44:07.067094 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-04 09:44:07.067112 | orchestrator | 2025-02-04 09:44:07.067128 | orchestrator | 2025-02-04 09:44:07.067145 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:44:07.067161 | orchestrator | Tuesday 04 February 2025 09:44:05 +0000 (0:01:19.373) 0:01:22.312 ****** 2025-02-04 09:44:07.067191 | orchestrator | =============================================================================== 2025-02-04 09:44:10.116486 | orchestrator | service-ks-register : designate | Creating services -------------------- 79.37s 2025-02-04 09:44:10.116716 | orchestrator | designate : include_tasks ----------------------------------------------- 1.07s 2025-02-04 09:44:10.116738 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.67s 2025-02-04 09:44:10.116754 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.60s 2025-02-04 09:44:10.116770 | orchestrator | 2025-02-04 09:44:07 | INFO  | Task 71254739-7aed-4310-944a-43301f546576 is in state SUCCESS 2025-02-04 09:44:10.116785 | orchestrator | 2025-02-04 09:44:07 | INFO  | Task 
6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:44:10.116801 | orchestrator | 2025-02-04 09:44:07 | INFO  | Task 36188578-51d5-4b4f-9ce4-918615765364 is in state STARTED 2025-02-04 09:44:10.116816 | orchestrator | 2025-02-04 09:44:07 | INFO  | Task 1e4e6b48-8faa-40b8-8397-039e56dc3487 is in state STARTED 2025-02-04 09:44:10.116831 | orchestrator | 2025-02-04 09:44:07 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:44:10.116865 | orchestrator | 2025-02-04 09:44:10 | INFO  | Task e16d8d1c-5ed7-438a-8f5d-78419227f281 is in state STARTED 2025-02-04 09:44:10.120304 | orchestrator | 2025-02-04 09:44:10 | INFO  | Task b1466803-00f6-4928-8205-4c239efc3f91 is in state STARTED 2025-02-04 09:44:10.120358 | orchestrator | 2025-02-04 09:44:10 | INFO  | Task 8e3f308d-a7ec-463b-8835-df7ef51fd4a3 is in state STARTED 2025-02-04 09:44:13.167430 | orchestrator | 2025-02-04 09:44:10 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:44:13.167532 | orchestrator | 2025-02-04 09:44:10 | INFO  | Task 36188578-51d5-4b4f-9ce4-918615765364 is in state STARTED 2025-02-04 09:44:13.167544 | orchestrator | 2025-02-04 09:44:10 | INFO  | Task 1e4e6b48-8faa-40b8-8397-039e56dc3487 is in state STARTED 2025-02-04 09:44:13.167554 | orchestrator | 2025-02-04 09:44:10 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:44:13.167575 | orchestrator | 2025-02-04 09:44:13 | INFO  | Task e16d8d1c-5ed7-438a-8f5d-78419227f281 is in state STARTED 2025-02-04 09:44:13.168480 | orchestrator | 2025-02-04 09:44:13 | INFO  | Task b1466803-00f6-4928-8205-4c239efc3f91 is in state STARTED 2025-02-04 09:44:13.170605 | orchestrator | 2025-02-04 09:44:13 | INFO  | Task 8e3f308d-a7ec-463b-8835-df7ef51fd4a3 is in state STARTED 2025-02-04 09:44:13.171783 | orchestrator | 2025-02-04 09:44:13 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:44:13.174142 | orchestrator | 2025-02-04 09:44:13 | INFO  | Task 36188578-51d5-4b4f-9ce4-918615765364 is in state STARTED 2025-02-04 09:44:13.175815 | orchestrator | 2025-02-04 09:44:13 | INFO  | Task 1e4e6b48-8faa-40b8-8397-039e56dc3487 is in state STARTED 2025-02-04 09:44:16.214331 | orchestrator | 2025-02-04 09:44:13 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:44:16.214474 | orchestrator | 2025-02-04 09:44:16 | INFO  | Task e16d8d1c-5ed7-438a-8f5d-78419227f281 is in state STARTED 2025-02-04 09:44:16.215184 | orchestrator | 2025-02-04 09:44:16 | INFO  | Task b1466803-00f6-4928-8205-4c239efc3f91 is in state STARTED 2025-02-04 09:44:16.215292 | orchestrator | 2025-02-04 09:44:16 | INFO  | Task 8e3f308d-a7ec-463b-8835-df7ef51fd4a3 is in state STARTED 2025-02-04 09:44:16.215890 | orchestrator | 2025-02-04 09:44:16 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:44:16.217118 | orchestrator | 2025-02-04 09:44:16 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:44:16.219014 | orchestrator | 2025-02-04 09:44:16.219055 | orchestrator | 2025-02-04 09:44:16.219063 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-04 09:44:16.219071 | orchestrator | 2025-02-04 09:44:16.219077 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-04 09:44:16.219084 | orchestrator | Tuesday 04 February 2025 09:42:44 +0000 (0:00:00.470) 0:00:00.470 ****** 2025-02-04 09:44:16.219090 | orchestrator | ok: [testbed-node-0] 2025-02-04 
09:44:16.219098 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:44:16.219104 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:44:16.219111 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:44:16.219117 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:44:16.219124 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:44:16.219130 | orchestrator | 2025-02-04 09:44:16.219136 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-04 09:44:16.219143 | orchestrator | Tuesday 04 February 2025 09:42:46 +0000 (0:00:01.551) 0:00:02.022 ****** 2025-02-04 09:44:16.219149 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-02-04 09:44:16.219156 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-02-04 09:44:16.219162 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-02-04 09:44:16.219169 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-02-04 09:44:16.219175 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-02-04 09:44:16.219181 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-02-04 09:44:16.219187 | orchestrator | 2025-02-04 09:44:16.219194 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-02-04 09:44:16.219200 | orchestrator | 2025-02-04 09:44:16.219206 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-02-04 09:44:16.219212 | orchestrator | Tuesday 04 February 2025 09:42:46 +0000 (0:00:00.843) 0:00:02.865 ****** 2025-02-04 09:44:16.219220 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:44:16.219227 | orchestrator | 2025-02-04 09:44:16.219233 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-02-04 09:44:16.219239 | orchestrator | Tuesday 04 February 2025 09:42:48 +0000 (0:00:01.571) 0:00:04.437 ****** 2025-02-04 09:44:16.219246 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:44:16.219252 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:44:16.219272 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:44:16.219279 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:44:16.219285 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:44:16.219291 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:44:16.219297 | orchestrator | 2025-02-04 09:44:16.219304 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-02-04 09:44:16.219310 | orchestrator | Tuesday 04 February 2025 09:42:50 +0000 (0:00:01.834) 0:00:06.271 ****** 2025-02-04 09:44:16.219316 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:44:16.219322 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:44:16.219328 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:44:16.219335 | orchestrator | ok: [testbed-node-3] 2025-02-04 09:44:16.219341 | orchestrator | ok: [testbed-node-4] 2025-02-04 09:44:16.219347 | orchestrator | ok: [testbed-node-5] 2025-02-04 09:44:16.219353 | orchestrator | 2025-02-04 09:44:16.219360 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-02-04 09:44:16.219366 | orchestrator | Tuesday 04 February 2025 09:42:51 +0000 (0:00:01.440) 0:00:07.711 ****** 2025-02-04 09:44:16.219372 | orchestrator | ok: [testbed-node-0] => { 2025-02-04 09:44:16.219379 | 
orchestrator |  "changed": false, 2025-02-04 09:44:16.219386 | orchestrator |  "msg": "All assertions passed" 2025-02-04 09:44:16.219392 | orchestrator | } 2025-02-04 09:44:16.219399 | orchestrator | ok: [testbed-node-1] => { 2025-02-04 09:44:16.219405 | orchestrator |  "changed": false, 2025-02-04 09:44:16.219411 | orchestrator |  "msg": "All assertions passed" 2025-02-04 09:44:16.219417 | orchestrator | } 2025-02-04 09:44:16.219432 | orchestrator | ok: [testbed-node-2] => { 2025-02-04 09:44:16.219439 | orchestrator |  "changed": false, 2025-02-04 09:44:16.219445 | orchestrator |  "msg": "All assertions passed" 2025-02-04 09:44:16.219451 | orchestrator | } 2025-02-04 09:44:16.219457 | orchestrator | ok: [testbed-node-3] => { 2025-02-04 09:44:16.219464 | orchestrator |  "changed": false, 2025-02-04 09:44:16.219470 | orchestrator |  "msg": "All assertions passed" 2025-02-04 09:44:16.219476 | orchestrator | } 2025-02-04 09:44:16.219483 | orchestrator | ok: [testbed-node-4] => { 2025-02-04 09:44:16.219489 | orchestrator |  "changed": false, 2025-02-04 09:44:16.219495 | orchestrator |  "msg": "All assertions passed" 2025-02-04 09:44:16.219501 | orchestrator | } 2025-02-04 09:44:16.219508 | orchestrator | ok: [testbed-node-5] => { 2025-02-04 09:44:16.219514 | orchestrator |  "changed": false, 2025-02-04 09:44:16.219520 | orchestrator |  "msg": "All assertions passed" 2025-02-04 09:44:16.219526 | orchestrator | } 2025-02-04 09:44:16.219532 | orchestrator | 2025-02-04 09:44:16.219539 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-02-04 09:44:16.219545 | orchestrator | Tuesday 04 February 2025 09:42:52 +0000 (0:00:01.019) 0:00:08.730 ****** 2025-02-04 09:44:16.219551 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:44:16.219558 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:44:16.219564 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:44:16.219570 | orchestrator | skipping: [testbed-node-3] 2025-02-04 09:44:16.219576 | orchestrator | skipping: [testbed-node-4] 2025-02-04 09:44:16.219583 | orchestrator | skipping: [testbed-node-5] 2025-02-04 09:44:16.219589 | orchestrator | 2025-02-04 09:44:16.219596 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-02-04 09:44:16.219603 | orchestrator | Tuesday 04 February 2025 09:42:53 +0000 (0:00:00.878) 0:00:09.609 ****** 2025-02-04 09:44:16.219609 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating services (5 retries left). 2025-02-04 09:44:16.219617 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating services (4 retries left). 2025-02-04 09:44:16.219624 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating services (3 retries left). 2025-02-04 09:44:16.219631 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating services (2 retries left). 2025-02-04 09:44:16.219641 | orchestrator | FAILED - RETRYING: [testbed-node-0]: neutron | Creating services (1 retries left). 
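All five retries above fail in the same place: the TCP connect to the internal Keystone endpoint aborts with [Errno 113] (EHOSTUNREACH on Linux) before any TLS or HTTP exchange happens, so this is a routing problem on the way to api-int.testbed.osism.xyz rather than a Keystone error. A minimal probe along these lines (a sketch using only the Python standard library; host and port are taken from the failing URL in the traceback below) reproduces the connect that keystoneauth1/urllib3 attempt first during identity discovery:

    import socket

    # Probe the internal Keystone endpoint from the failing URL. On Linux,
    # socket.create_connection() raises OSError with errno 113 for the same
    # "No route to host" condition that urllib3 wraps in NewConnectionError.
    def probe(host: str, port: int, timeout: float = 5.0) -> None:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                print(f"{host}:{port} is reachable")
        except OSError as exc:
            print(f"{host}:{port} is unreachable: {exc}")

    probe("api-int.testbed.osism.xyz", 5000)

The play recaps in this log report the failure only on testbed-node-0, the host the service-ks-register tasks run against, which points at the route to the internal API endpoint from that node rather than at the identity service itself.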
2025-02-04 09:44:16.219699 | orchestrator | failed: [testbed-node-0] (item=neutron (network)) => {"action": "os_keystone_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Openstack Networking", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9696"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9696"}], "name": "neutron", "type": "network"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connection.py\", line 174, in _new_conn\n conn = connection.create_connection(\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/util/connection.py\", line 95, in create_connection\n raise err\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/util/connection.py\", line 85, in create_connection\n sock.connect(sa)\nOSError: [Errno 113] No route to host\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connectionpool.py\", line 715, in urlopen\n httplib_response = self._make_request(\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connectionpool.py\", line 404, in _make_request\n self._validate_conn(conn)\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connectionpool.py\", line 1058, in _validate_conn\n conn.connect()\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connection.py\", line 363, in connect\n self.sock = conn = self._new_conn()\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connection.py\", line 186, in _new_conn\n raise NewConnectionError(\nurllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x…>: Failed to establish a new connection: [Errno 113] No route to host\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/requests/adapters.py\", line 486, in send\n resp = conn.urlopen(\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connectionpool.py\", line 799, in urlopen\n retries = retries.increment(\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/util/retry.py\", line 592, in increment\n raise MaxRetryError(_pool, url, error or ResponseError(cause))\nurllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='api-int.testbed.osism.xyz', port=5000): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x…>: Failed to establish a new connection: [Errno 113] No route to host'))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1021, in _send_request\n resp = self.session.request(method, url, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/requests/sessions.py\", line 589, in request\n resp = self.send(prep, **send_kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/requests/sessions.py\", line 703, in send\n r = adapter.send(request, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/requests/adapters.py\", line 519, in send\n raise ConnectionError(e, request=request)\nrequests.exceptions.ConnectionError: HTTPSConnectionPool(host='api-int.testbed.osism.xyz', port=5000): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x…>:
Failed to establish a new connection: [Errno 113] No route to host'))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 133, in _do_create_plugin\n disc = self.get_discovery(session,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 605, in get_discovery\n return discover.get_discovery(session=session, url=url,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 1459, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 539, in __init__\n self._data = get_version_data(session, url,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 106, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1154, in get\n return self.request(url, 'GET', **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 930, in request\n resp = send(**kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1037, in _send_request\n raise exceptions.ConnectFailure(msg)\nkeystoneauth1.exceptions.connection.ConnectFailure: Unable to establish connection to https://api-int.testbed.osism.xyz:5000: HTTPSConnectionPool(host='api-int.testbed.osism.xyz', port=5000): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x…>: Failed to establish a new connection: [Errno 113] No route to host'))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1738662250.755091-4181-171687227760201/AnsiballZ_catalog_service.py\", line 107, in <module>\n _ansiballz_main()\n File \"/tmp/ansible-tmp-1738662250.755091-4181-171687227760201/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1738662250.755091-4181-171687227760201/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"/usr/lib/python3.10/runpy.py\", line 224, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib/python3.10/runpy.py\", line 96, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_os_keystone_service_payload_1e2dfq0d/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in <module>\n File \"/tmp/ansible_os_keystone_service_payload_1e2dfq0d/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_os_keystone_service_payload_1e2dfq0d/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File
\"/tmp/ansible_os_keystone_service_payload_1e2dfq0d/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_os_keystone_service_payload_1e2dfq0d/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.10/site-packages/openstack/service_description.py\", line 89, in __get__\n proxy = self._make_proxy(instance)\n File \"/opt/ansible/lib/python3.10/site-packages/openstack/service_description.py\", line 287, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 268, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 131, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 203, in get_auth_ref\n self._plugin = self._do_create_plugin(session)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 155, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct.
Unable to establish connection to https://api-int.testbed.osism.xyz:5000: HTTPSConnectionPool(host='api-int.testbed.osism.xyz', port=5000): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x…>: Failed to establish a new connection: [Errno 113] No route to host'))\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} 2025-02-04 09:44:19.258433 | orchestrator | 2025-02-04 09:44:19.258531 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:44:19.258566 | orchestrator | testbed-node-0 : ok=6  changed=0 unreachable=0 failed=1  skipped=1  rescued=0 ignored=0 2025-02-04 09:44:19.258580 | orchestrator | testbed-node-1 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-04 09:44:19.258593 | orchestrator | testbed-node-2 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-04 09:44:19.258605 | orchestrator | testbed-node-3 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-04 09:44:19.258617 | orchestrator | testbed-node-4 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-04 09:44:19.258629 | orchestrator | testbed-node-5 : ok=6  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-04 09:44:19.258641 | orchestrator | 2025-02-04 09:44:19.258653 | orchestrator | 2025-02-04 09:44:19.258665 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:44:19.258677 | orchestrator | Tuesday 04 February 2025 09:44:14 +0000 (0:01:20.763) 0:01:30.372 ****** 2025-02-04 09:44:19.258692 | orchestrator | =============================================================================== 2025-02-04 09:44:19.258704 | orchestrator | service-ks-register : neutron | Creating services ---------------------- 80.76s 2025-02-04 09:44:19.258769 | orchestrator | neutron : Get container facts ------------------------------------------- 1.83s 2025-02-04 09:44:19.258778 | orchestrator | neutron : include_tasks ------------------------------------------------- 1.57s 2025-02-04 09:44:19.258786 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.55s 2025-02-04 09:44:19.258793 | orchestrator | neutron : Get container volume facts ------------------------------------ 1.44s 2025-02-04 09:44:19.258800 | orchestrator | neutron : Check for ML2/OVN presence ------------------------------------ 1.02s 2025-02-04 09:44:19.258808 | orchestrator | neutron : Check for ML2/OVS presence ------------------------------------ 0.88s 2025-02-04 09:44:19.258815 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.84s 2025-02-04 09:44:19.258823 | orchestrator | 2025-02-04 09:44:16 | INFO  | Task 36188578-51d5-4b4f-9ce4-918615765364 is in state SUCCESS 2025-02-04 09:44:19.258834 | orchestrator | 2025-02-04 09:44:16 | INFO  | Task 1e4e6b48-8faa-40b8-8397-039e56dc3487 is in state STARTED 2025-02-04 09:44:19.258846 | orchestrator | 2025-02-04 09:44:16 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:44:19.258867 | orchestrator | 2025-02-04 09:44:19 | INFO  | Task e16d8d1c-5ed7-438a-8f5d-78419227f281 is in state STARTED 2025-02-04 09:44:19.259110 | orchestrator | 2025-02-04 09:44:19 | INFO  | Task b1466803-00f6-4928-8205-4c239efc3f91 is in state STARTED 2025-02-04 09:44:19.259131 | orchestrator | 2025-02-04 09:44:19 | INFO  | Task 8e3f308d-a7ec-463b-8835-df7ef51fd4a3 is in 
state STARTED 2025-02-04 09:44:19.259761 | orchestrator | 2025-02-04 09:44:19 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:44:19.260424 | orchestrator | 2025-02-04 09:44:19 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:44:19.261159 | orchestrator | 2025-02-04 09:44:19 | INFO  | Task 1e4e6b48-8faa-40b8-8397-039e56dc3487 is in state STARTED 2025-02-04 09:44:22.296758 | orchestrator | 2025-02-04 09:44:19 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:44:22.298081 | orchestrator | 2025-02-04 09:44:22 | INFO  | Task e16d8d1c-5ed7-438a-8f5d-78419227f281 is in state STARTED 2025-02-04 09:44:22.299292 | orchestrator | 2025-02-04 09:44:22 | INFO  | Task b1466803-00f6-4928-8205-4c239efc3f91 is in state STARTED 2025-02-04 09:44:22.299330 | orchestrator | 2025-02-04 09:44:22 | INFO  | Task 8e3f308d-a7ec-463b-8835-df7ef51fd4a3 is in state STARTED 2025-02-04 09:44:22.299353 | orchestrator | 2025-02-04 09:44:22 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:44:22.300108 | orchestrator | 2025-02-04 09:44:22 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:44:22.301384 | orchestrator | 2025-02-04 09:44:22 | INFO  | Task 1e4e6b48-8faa-40b8-8397-039e56dc3487 is in state STARTED 2025-02-04 09:44:25.348134 | orchestrator | 2025-02-04 09:44:22 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:44:25.348282 | orchestrator | 2025-02-04 09:44:25 | INFO  | Task e16d8d1c-5ed7-438a-8f5d-78419227f281 is in state STARTED 2025-02-04 09:44:25.349807 | orchestrator | 2025-02-04 09:44:25 | INFO  | Task b1466803-00f6-4928-8205-4c239efc3f91 is in state STARTED 2025-02-04 09:44:25.349848 | orchestrator | 2025-02-04 09:44:25 | INFO  | Task 8e3f308d-a7ec-463b-8835-df7ef51fd4a3 is in state STARTED 2025-02-04 09:44:25.351060 | orchestrator | 2025-02-04 09:44:25 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:44:25.351882 | orchestrator | 2025-02-04 09:44:25 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:44:25.354073 | orchestrator | 2025-02-04 09:44:25 | INFO  | Task 1e4e6b48-8faa-40b8-8397-039e56dc3487 is in state STARTED 2025-02-04 09:44:25.354112 | orchestrator | 2025-02-04 09:44:25 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:44:28.400492 | orchestrator | 2025-02-04 09:44:28 | INFO  | Task e16d8d1c-5ed7-438a-8f5d-78419227f281 is in state STARTED 2025-02-04 09:44:28.401236 | orchestrator | 2025-02-04 09:44:28 | INFO  | Task b1466803-00f6-4928-8205-4c239efc3f91 is in state STARTED 2025-02-04 09:44:28.402537 | orchestrator | 2025-02-04 09:44:28 | INFO  | Task 8e3f308d-a7ec-463b-8835-df7ef51fd4a3 is in state STARTED 2025-02-04 09:44:28.404225 | orchestrator | 2025-02-04 09:44:28 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:44:28.404732 | orchestrator | 2025-02-04 09:44:28 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:44:28.406084 | orchestrator | 2025-02-04 09:44:28 | INFO  | Task 1e4e6b48-8faa-40b8-8397-039e56dc3487 is in state STARTED 2025-02-04 09:44:31.441691 | orchestrator | 2025-02-04 09:44:28 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:44:31.441877 | orchestrator | 2025-02-04 09:44:31 | INFO  | Task e16d8d1c-5ed7-438a-8f5d-78419227f281 is in state STARTED 2025-02-04 09:44:31.442622 | orchestrator | 2025-02-04 09:44:31 | INFO  | Task 
b1466803-00f6-4928-8205-4c239efc3f91 is in state STARTED 2025-02-04 09:44:31.442697 | orchestrator | 2025-02-04 09:44:31 | INFO  | Task 8e3f308d-a7ec-463b-8835-df7ef51fd4a3 is in state STARTED 2025-02-04 09:44:31.443720 | orchestrator | 2025-02-04 09:44:31 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:44:31.444502 | orchestrator | 2025-02-04 09:44:31 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:44:31.445345 | orchestrator | 2025-02-04 09:44:31 | INFO  | Task 1e4e6b48-8faa-40b8-8397-039e56dc3487 is in state STARTED 2025-02-04 09:44:34.484873 | orchestrator | 2025-02-04 09:44:31 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:44:34.485070 | orchestrator | 2025-02-04 09:44:34 | INFO  | Task e16d8d1c-5ed7-438a-8f5d-78419227f281 is in state STARTED 2025-02-04 09:44:34.485247 | orchestrator | 2025-02-04 09:44:34 | INFO  | Task b1466803-00f6-4928-8205-4c239efc3f91 is in state STARTED 2025-02-04 09:44:34.486079 | orchestrator | 2025-02-04 09:44:34 | INFO  | Task 8e3f308d-a7ec-463b-8835-df7ef51fd4a3 is in state STARTED 2025-02-04 09:44:34.487068 | orchestrator | 2025-02-04 09:44:34 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:44:34.488018 | orchestrator | 2025-02-04 09:44:34 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:44:34.489379 | orchestrator | 2025-02-04 09:44:34 | INFO  | Task 1e4e6b48-8faa-40b8-8397-039e56dc3487 is in state SUCCESS 2025-02-04 09:44:34.489876 | orchestrator | 2025-02-04 09:44:34 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:44:34.491384 | orchestrator | 2025-02-04 09:44:34.491442 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-02-04 09:44:34.491457 | orchestrator | 2025-02-04 09:44:34.491469 | orchestrator | PLAY [Apply role fetch-keys] *************************************************** 2025-02-04 09:44:34.491480 | orchestrator | 2025-02-04 09:44:34.491492 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-02-04 09:44:34.491504 | orchestrator | Tuesday 04 February 2025 09:44:01 +0000 (0:00:00.701) 0:00:00.701 ****** 2025-02-04 09:44:34.491516 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0 2025-02-04 09:44:34.491528 | orchestrator | 2025-02-04 09:44:34.491540 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-02-04 09:44:34.491552 | orchestrator | Tuesday 04 February 2025 09:44:01 +0000 (0:00:00.234) 0:00:00.936 ****** 2025-02-04 09:44:34.491564 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-02-04 09:44:34.491576 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1) 2025-02-04 09:44:34.491588 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-02-04 09:44:34.491599 | orchestrator | 2025-02-04 09:44:34.491611 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-02-04 09:44:34.491623 | orchestrator | Tuesday 04 February 2025 09:44:02 +0000 (0:00:00.921) 0:00:01.858 ****** 2025-02-04 09:44:34.491634 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2025-02-04 09:44:34.491645 | orchestrator | 2025-02-04 09:44:34.491656 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 
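The interleaved "Task <uuid> is in state ..." lines that run through this log come from a wait loop in the deployment tooling: it polls each queued task, reports its state, drops tasks once they reach SUCCESS, and sleeps one second between rounds ("Wait 1 second(s) until the next check"). A rough sketch of that pattern follows (not the actual osism implementation; get_task_state() is a hypothetical stand-in for the real task-state query, and failure states are omitted for brevity):

    import time

    # Poll a set of task IDs until every one reports SUCCESS, printing a
    # status line per task and sleeping between polling rounds, mirroring
    # the log output above.
    def wait_for_tasks(task_ids, get_task_state, interval: float = 1.0) -> None:
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state == "SUCCESS":
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)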
2025-02-04 09:44:34.491668 | orchestrator | Tuesday 04 February 2025 09:44:02 +0000 (0:00:00.247) 0:00:02.106 ****** 2025-02-04 09:44:34.491679 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:44:34.491691 | orchestrator | 2025-02-04 09:44:34.491702 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-02-04 09:44:34.491714 | orchestrator | Tuesday 04 February 2025 09:44:03 +0000 (0:00:00.703) 0:00:02.809 ****** 2025-02-04 09:44:34.491725 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:44:34.491740 | orchestrator | 2025-02-04 09:44:34.491758 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-02-04 09:44:34.491775 | orchestrator | Tuesday 04 February 2025 09:44:03 +0000 (0:00:00.131) 0:00:02.941 ****** 2025-02-04 09:44:34.491793 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:44:34.491811 | orchestrator | 2025-02-04 09:44:34.491828 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-02-04 09:44:34.491847 | orchestrator | Tuesday 04 February 2025 09:44:03 +0000 (0:00:00.479) 0:00:03.420 ****** 2025-02-04 09:44:34.491867 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:44:34.491883 | orchestrator | 2025-02-04 09:44:34.491895 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-02-04 09:44:34.491906 | orchestrator | Tuesday 04 February 2025 09:44:03 +0000 (0:00:00.151) 0:00:03.572 ****** 2025-02-04 09:44:34.491917 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:44:34.491965 | orchestrator | 2025-02-04 09:44:34.492016 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-02-04 09:44:34.492030 | orchestrator | Tuesday 04 February 2025 09:44:04 +0000 (0:00:00.145) 0:00:03.717 ****** 2025-02-04 09:44:34.492044 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:44:34.492057 | orchestrator | 2025-02-04 09:44:34.492069 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-02-04 09:44:34.492082 | orchestrator | Tuesday 04 February 2025 09:44:04 +0000 (0:00:00.194) 0:00:03.912 ****** 2025-02-04 09:44:34.492095 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:44:34.492109 | orchestrator | 2025-02-04 09:44:34.492121 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-02-04 09:44:34.492132 | orchestrator | Tuesday 04 February 2025 09:44:04 +0000 (0:00:00.148) 0:00:04.060 ****** 2025-02-04 09:44:34.492143 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:44:34.492155 | orchestrator | 2025-02-04 09:44:34.492166 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-02-04 09:44:34.492178 | orchestrator | Tuesday 04 February 2025 09:44:04 +0000 (0:00:00.365) 0:00:04.425 ****** 2025-02-04 09:44:34.492189 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-02-04 09:44:34.492200 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-04 09:44:34.492212 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-04 09:44:34.492223 | orchestrator | 2025-02-04 09:44:34.492234 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-02-04 09:44:34.492246 | orchestrator | Tuesday 04 February 2025 09:44:05 +0000 (0:00:00.890) 0:00:05.315 
****** 2025-02-04 09:44:34.492257 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:44:34.492268 | orchestrator | 2025-02-04 09:44:34.492280 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-02-04 09:44:34.492292 | orchestrator | Tuesday 04 February 2025 09:44:05 +0000 (0:00:00.274) 0:00:05.590 ****** 2025-02-04 09:44:34.492303 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-02-04 09:44:34.492315 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-04 09:44:34.492326 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-04 09:44:34.492337 | orchestrator | 2025-02-04 09:44:34.492348 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-02-04 09:44:34.492359 | orchestrator | Tuesday 04 February 2025 09:44:08 +0000 (0:00:02.387) 0:00:07.977 ****** 2025-02-04 09:44:34.492370 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-04 09:44:34.492382 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-04 09:44:34.492393 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-04 09:44:34.492405 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:44:34.492418 | orchestrator | 2025-02-04 09:44:34.492437 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-02-04 09:44:34.492470 | orchestrator | Tuesday 04 February 2025 09:44:08 +0000 (0:00:00.475) 0:00:08.452 ****** 2025-02-04 09:44:34.492493 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-02-04 09:44:34.492515 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-02-04 09:44:34.492534 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-02-04 09:44:34.492553 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:44:34.492583 | orchestrator | 2025-02-04 09:44:34.492602 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-02-04 09:44:34.492617 | orchestrator | Tuesday 04 February 2025 09:44:09 +0000 (0:00:01.001) 0:00:09.454 ****** 2025-02-04 09:44:34.492635 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-02-04 09:44:34.492652 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-02-04 09:44:34.492664 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-02-04 09:44:34.492676 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:44:34.492688 | orchestrator | 2025-02-04 09:44:34.492699 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-02-04 09:44:34.492711 | orchestrator | Tuesday 04 February 2025 09:44:09 +0000 (0:00:00.196) 0:00:09.651 ****** 2025-02-04 09:44:34.492727 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '8efb218d8299', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-02-04 09:44:06.779328', 'end': '2025-02-04 09:44:06.819388', 'delta': '0:00:00.040060', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['8efb218d8299'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-02-04 09:44:34.492742 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '7f40cdf735f3', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-02-04 09:44:07.370883', 'end': '2025-02-04 09:44:07.413235', 'delta': '0:00:00.042352', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['7f40cdf735f3'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-02-04 09:44:34.492763 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '69fc3f27bad2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-02-04 09:44:08.073433', 'end': '2025-02-04 09:44:08.116414', 'delta': '0:00:00.042981', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['69fc3f27bad2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-02-04 09:44:34.492782 | orchestrator | 2025-02-04 09:44:34.492793 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-02-04 09:44:34.492805 | orchestrator | Tuesday 04 February 2025 09:44:10 +0000 (0:00:00.218) 
0:00:09.870 ****** 2025-02-04 09:44:34.492817 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:44:34.492828 | orchestrator | 2025-02-04 09:44:34.492839 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-02-04 09:44:34.492851 | orchestrator | Tuesday 04 February 2025 09:44:10 +0000 (0:00:00.691) 0:00:10.561 ****** 2025-02-04 09:44:34.492862 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2025-02-04 09:44:34.492873 | orchestrator | 2025-02-04 09:44:34.492885 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-02-04 09:44:34.492896 | orchestrator | Tuesday 04 February 2025 09:44:13 +0000 (0:00:02.608) 0:00:13.170 ****** 2025-02-04 09:44:34.492907 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:44:34.492938 | orchestrator | 2025-02-04 09:44:34.492952 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-02-04 09:44:34.492963 | orchestrator | Tuesday 04 February 2025 09:44:13 +0000 (0:00:00.155) 0:00:13.325 ****** 2025-02-04 09:44:34.492974 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:44:34.492986 | orchestrator | 2025-02-04 09:44:34.493002 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-02-04 09:44:34.493014 | orchestrator | Tuesday 04 February 2025 09:44:13 +0000 (0:00:00.246) 0:00:13.571 ****** 2025-02-04 09:44:34.493025 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:44:34.493041 | orchestrator | 2025-02-04 09:44:34.493053 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-02-04 09:44:34.493064 | orchestrator | Tuesday 04 February 2025 09:44:14 +0000 (0:00:00.132) 0:00:13.704 ****** 2025-02-04 09:44:34.493075 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:44:34.493087 | orchestrator | 2025-02-04 09:44:34.493098 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-02-04 09:44:34.493110 | orchestrator | Tuesday 04 February 2025 09:44:14 +0000 (0:00:00.190) 0:00:13.895 ****** 2025-02-04 09:44:34.493121 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:44:34.493132 | orchestrator | 2025-02-04 09:44:34.493144 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-02-04 09:44:34.493155 | orchestrator | Tuesday 04 February 2025 09:44:14 +0000 (0:00:00.243) 0:00:14.138 ****** 2025-02-04 09:44:34.493166 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:44:34.493178 | orchestrator | 2025-02-04 09:44:34.493189 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-02-04 09:44:34.493201 | orchestrator | Tuesday 04 February 2025 09:44:14 +0000 (0:00:00.131) 0:00:14.269 ****** 2025-02-04 09:44:34.493212 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:44:34.493223 | orchestrator | 2025-02-04 09:44:34.493234 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-02-04 09:44:34.493246 | orchestrator | Tuesday 04 February 2025 09:44:14 +0000 (0:00:00.132) 0:00:14.402 ****** 2025-02-04 09:44:34.493257 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:44:34.493268 | orchestrator | 2025-02-04 09:44:34.493280 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-02-04 09:44:34.493291 | orchestrator | Tuesday 04 February 
2025 09:44:15 +0000 (0:00:00.374) 0:00:14.776 ****** 2025-02-04 09:44:34.493302 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:44:34.493313 | orchestrator | 2025-02-04 09:44:34.493325 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-02-04 09:44:34.493336 | orchestrator | Tuesday 04 February 2025 09:44:15 +0000 (0:00:00.171) 0:00:14.947 ****** 2025-02-04 09:44:34.493348 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:44:34.493359 | orchestrator | 2025-02-04 09:44:34.493370 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-02-04 09:44:34.493387 | orchestrator | Tuesday 04 February 2025 09:44:15 +0000 (0:00:00.163) 0:00:15.111 ****** 2025-02-04 09:44:34.493398 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:44:34.493410 | orchestrator | 2025-02-04 09:44:34.493421 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-02-04 09:44:34.493433 | orchestrator | Tuesday 04 February 2025 09:44:15 +0000 (0:00:00.164) 0:00:15.276 ****** 2025-02-04 09:44:34.493444 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:44:34.493455 | orchestrator | 2025-02-04 09:44:34.493466 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-02-04 09:44:34.493478 | orchestrator | Tuesday 04 February 2025 09:44:15 +0000 (0:00:00.135) 0:00:15.412 ****** 2025-02-04 09:44:34.493489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:44:34.493507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:44:34.493520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:44:34.493532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:44:34.493543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:44:34.493559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:44:34.493571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:44:34.493588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-02-04 09:44:34.493627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08b5b955-2a19-469b-a49b-98bfe933a640', 'scsi-SQEMU_QEMU_HARDDISK_08b5b955-2a19-469b-a49b-98bfe933a640'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08b5b955-2a19-469b-a49b-98bfe933a640-part1', 'scsi-SQEMU_QEMU_HARDDISK_08b5b955-2a19-469b-a49b-98bfe933a640-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08b5b955-2a19-469b-a49b-98bfe933a640-part14', 'scsi-SQEMU_QEMU_HARDDISK_08b5b955-2a19-469b-a49b-98bfe933a640-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08b5b955-2a19-469b-a49b-98bfe933a640-part15', 'scsi-SQEMU_QEMU_HARDDISK_08b5b955-2a19-469b-a49b-98bfe933a640-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_08b5b955-2a19-469b-a49b-98bfe933a640-part16', 'scsi-SQEMU_QEMU_HARDDISK_08b5b955-2a19-469b-a49b-98bfe933a640-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-04 09:44:34.493649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_17d2e1d1-143d-4df8-b794-06a49264520c', 'scsi-SQEMU_QEMU_HARDDISK_17d2e1d1-143d-4df8-b794-06a49264520c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-04 09:44:34.493670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_212bd4e9-d9e4-4fb6-aa1a-c75a1354e796', 'scsi-SQEMU_QEMU_HARDDISK_212bd4e9-d9e4-4fb6-aa1a-c75a1354e796'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-04 09:44:34.493691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_84ce1e5e-93f1-4e17-9c74-c98d07335b49', 'scsi-SQEMU_QEMU_HARDDISK_84ce1e5e-93f1-4e17-9c74-c98d07335b49'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-04 09:44:34.493721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-02-04-08-43-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-02-04 09:44:34.493742 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:44:34.493755 | orchestrator | 2025-02-04 09:44:34.493766 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-02-04 09:44:34.493778 | orchestrator | Tuesday 04 February 2025 09:44:16 +0000 (0:00:00.326) 0:00:15.738 ****** 2025-02-04 09:44:34.493789 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:44:34.493801 | orchestrator | 2025-02-04 09:44:34.493812 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-02-04 09:44:34.493823 | orchestrator | Tuesday 04 February 2025 09:44:16 +0000 (0:00:00.290) 0:00:16.029 ****** 2025-02-04 09:44:34.493834 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:44:34.493846 | 
orchestrator | 2025-02-04 09:44:34.493857 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-02-04 09:44:34.493870 | orchestrator | Tuesday 04 February 2025 09:44:16 +0000 (0:00:00.155) 0:00:16.184 ****** 2025-02-04 09:44:34.493889 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:44:34.493908 | orchestrator | 2025-02-04 09:44:34.493955 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-02-04 09:44:34.493969 | orchestrator | Tuesday 04 February 2025 09:44:16 +0000 (0:00:00.163) 0:00:16.348 ****** 2025-02-04 09:44:34.493986 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:44:34.493998 | orchestrator | 2025-02-04 09:44:34.494010 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-02-04 09:44:34.494069 | orchestrator | Tuesday 04 February 2025 09:44:17 +0000 (0:00:00.524) 0:00:16.872 ****** 2025-02-04 09:44:34.494091 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:44:34.494110 | orchestrator | 2025-02-04 09:44:34.494132 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-02-04 09:44:34.494152 | orchestrator | Tuesday 04 February 2025 09:44:17 +0000 (0:00:00.180) 0:00:17.053 ****** 2025-02-04 09:44:34.494164 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:44:34.494175 | orchestrator | 2025-02-04 09:44:34.494186 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-02-04 09:44:34.494198 | orchestrator | Tuesday 04 February 2025 09:44:17 +0000 (0:00:00.479) 0:00:17.532 ****** 2025-02-04 09:44:34.494209 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:44:34.494220 | orchestrator | 2025-02-04 09:44:34.494231 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-02-04 09:44:34.494243 | orchestrator | Tuesday 04 February 2025 09:44:18 +0000 (0:00:00.180) 0:00:17.713 ****** 2025-02-04 09:44:34.494254 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:44:34.494265 | orchestrator | 2025-02-04 09:44:34.494276 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-02-04 09:44:34.494288 | orchestrator | Tuesday 04 February 2025 09:44:18 +0000 (0:00:00.303) 0:00:18.016 ****** 2025-02-04 09:44:34.494299 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:44:34.494310 | orchestrator | 2025-02-04 09:44:34.494321 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-02-04 09:44:34.494332 | orchestrator | Tuesday 04 February 2025 09:44:18 +0000 (0:00:00.165) 0:00:18.181 ****** 2025-02-04 09:44:34.494344 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-04 09:44:34.494363 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-04 09:44:34.494385 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-04 09:44:34.494397 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:44:34.494408 | orchestrator | 2025-02-04 09:44:34.494420 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-02-04 09:44:34.494431 | orchestrator | Tuesday 04 February 2025 09:44:19 +0000 (0:00:00.549) 0:00:18.731 ****** 2025-02-04 09:44:34.494442 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-04 09:44:34.494453 | orchestrator | skipping: [testbed-node-0] => 
(item=testbed-node-1)  2025-02-04 09:44:34.494464 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-04 09:44:34.494476 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:44:34.494487 | orchestrator | 2025-02-04 09:44:34.494498 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-02-04 09:44:34.494509 | orchestrator | Tuesday 04 February 2025 09:44:19 +0000 (0:00:00.497) 0:00:19.229 ****** 2025-02-04 09:44:34.494520 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-02-04 09:44:34.494532 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-02-04 09:44:34.494543 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-02-04 09:44:34.494554 | orchestrator | 2025-02-04 09:44:34.494565 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-02-04 09:44:34.494577 | orchestrator | Tuesday 04 February 2025 09:44:20 +0000 (0:00:01.444) 0:00:20.673 ****** 2025-02-04 09:44:34.494588 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-04 09:44:34.494599 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-04 09:44:34.494610 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-04 09:44:34.494622 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:44:34.494633 | orchestrator | 2025-02-04 09:44:34.494645 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-02-04 09:44:34.494656 | orchestrator | Tuesday 04 February 2025 09:44:21 +0000 (0:00:00.231) 0:00:20.905 ****** 2025-02-04 09:44:34.494667 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-02-04 09:44:34.494679 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-02-04 09:44:34.494690 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-02-04 09:44:34.494701 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:44:34.494714 | orchestrator | 2025-02-04 09:44:34.494732 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-02-04 09:44:34.494751 | orchestrator | Tuesday 04 February 2025 09:44:21 +0000 (0:00:00.235) 0:00:21.140 ****** 2025-02-04 09:44:34.494769 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-02-04 09:44:34.494788 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-02-04 09:44:34.494808 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-02-04 09:44:34.494828 | orchestrator | 2025-02-04 09:44:34.494847 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-02-04 09:44:34.494866 | orchestrator | Tuesday 04 February 2025 09:44:21 +0000 (0:00:00.440) 0:00:21.581 ****** 2025-02-04 09:44:34.494878 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:44:34.494889 | orchestrator | 2025-02-04 09:44:34.494901 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-02-04 09:44:34.494912 | orchestrator | Tuesday 04 February 2025 09:44:22 +0000 (0:00:00.159) 0:00:21.740 ****** 2025-02-04 09:44:34.494972 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:44:34.494986 | orchestrator | 2025-02-04 09:44:34.494998 | orchestrator | TASK [ceph-facts : 
set_fact ceph_run_cmd] ************************************** 2025-02-04 09:44:34.495009 | orchestrator | Tuesday 04 February 2025 09:44:22 +0000 (0:00:00.171) 0:00:21.912 ****** 2025-02-04 09:44:34.495020 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-02-04 09:44:34.495047 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-04 09:44:34.495060 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-04 09:44:34.495071 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-02-04 09:44:34.495082 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-02-04 09:44:34.495099 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-02-04 09:44:34.495110 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-02-04 09:44:34.495122 | orchestrator | 2025-02-04 09:44:34.495133 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-02-04 09:44:34.495144 | orchestrator | Tuesday 04 February 2025 09:44:23 +0000 (0:00:00.990) 0:00:22.902 ****** 2025-02-04 09:44:34.495156 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-02-04 09:44:34.495167 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-02-04 09:44:34.495178 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-02-04 09:44:34.495189 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-02-04 09:44:34.495200 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-02-04 09:44:34.495212 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-02-04 09:44:34.495223 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-02-04 09:44:34.495234 | orchestrator | 2025-02-04 09:44:34.495246 | orchestrator | TASK [ceph-fetch-keys : lookup keys in /etc/ceph] ****************************** 2025-02-04 09:44:34.495257 | orchestrator | Tuesday 04 February 2025 09:44:25 +0000 (0:00:02.026) 0:00:24.928 ****** 2025-02-04 09:44:34.495268 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:44:34.495280 | orchestrator | 2025-02-04 09:44:34.495291 | orchestrator | TASK [ceph-fetch-keys : create a local fetch directory if it does not exist] *** 2025-02-04 09:44:34.495302 | orchestrator | Tuesday 04 February 2025 09:44:25 +0000 (0:00:00.514) 0:00:25.443 ****** 2025-02-04 09:44:34.495313 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-04 09:44:34.495324 | orchestrator | 2025-02-04 09:44:34.495336 | orchestrator | TASK [ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/] *** 2025-02-04 09:44:34.495348 | orchestrator | Tuesday 04 February 2025 09:44:26 +0000 (0:00:00.803) 0:00:26.247 ****** 2025-02-04 09:44:34.495359 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.admin.keyring) 2025-02-04 09:44:34.495370 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder-backup.keyring) 2025-02-04 09:44:34.495382 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder.keyring) 2025-02-04 
09:44:34.495393 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.crash.keyring) 2025-02-04 09:44:34.495405 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.glance.keyring) 2025-02-04 09:44:34.495416 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.gnocchi.keyring) 2025-02-04 09:44:34.495427 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.manila.keyring) 2025-02-04 09:44:34.495438 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.nova.keyring) 2025-02-04 09:44:34.495450 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-0.keyring) 2025-02-04 09:44:34.495461 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-1.keyring) 2025-02-04 09:44:34.495472 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-2.keyring) 2025-02-04 09:44:34.495483 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mon.keyring) 2025-02-04 09:44:34.495500 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) 2025-02-04 09:44:34.495510 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) 2025-02-04 09:44:34.495520 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) 2025-02-04 09:44:34.495530 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) 2025-02-04 09:44:34.495541 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr/ceph.keyring) 2025-02-04 09:44:34.495551 | orchestrator | 2025-02-04 09:44:34.495561 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:44:34.495572 | orchestrator | testbed-node-0 : ok=28  changed=3  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-02-04 09:44:34.495584 | orchestrator | 2025-02-04 09:44:34.495594 | orchestrator | 2025-02-04 09:44:34.495604 | orchestrator | 2025-02-04 09:44:34.495619 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:44:34.495629 | orchestrator | Tuesday 04 February 2025 09:44:32 +0000 (0:00:06.439) 0:00:32.686 ****** 2025-02-04 09:44:34.495640 | orchestrator | =============================================================================== 2025-02-04 09:44:34.495650 | orchestrator | ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/ --- 6.44s 2025-02-04 09:44:34.495661 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 2.61s 2025-02-04 09:44:34.495671 | orchestrator | ceph-facts : find a running mon container ------------------------------- 2.39s 2025-02-04 09:44:34.495685 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 2.03s 2025-02-04 09:44:37.537622 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.44s 2025-02-04 09:44:37.537746 | orchestrator | ceph-facts : check if the ceph mon socket is in-use --------------------- 1.00s 2025-02-04 09:44:37.537767 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 0.99s 2025-02-04 09:44:37.537782 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.92s 2025-02-04 09:44:37.537797 | orchestrator | ceph-facts : 
set_fact monitor_name ansible_facts['hostname'] ------------ 0.89s 2025-02-04 09:44:37.537811 | orchestrator | ceph-fetch-keys : create a local fetch directory if it does not exist --- 0.80s 2025-02-04 09:44:37.537826 | orchestrator | ceph-facts : check if it is atomic host --------------------------------- 0.70s 2025-02-04 09:44:37.537840 | orchestrator | ceph-facts : set_fact _container_exec_cmd ------------------------------- 0.69s 2025-02-04 09:44:37.537854 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 0.55s 2025-02-04 09:44:37.537868 | orchestrator | ceph-facts : check if the ceph conf exists ------------------------------ 0.52s 2025-02-04 09:44:37.537882 | orchestrator | ceph-fetch-keys : lookup keys in /etc/ceph ------------------------------ 0.51s 2025-02-04 09:44:37.537896 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.50s 2025-02-04 09:44:37.537910 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.48s 2025-02-04 09:44:37.537954 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.48s 2025-02-04 09:44:37.537970 | orchestrator | ceph-facts : check for a ceph mon socket -------------------------------- 0.48s 2025-02-04 09:44:37.537984 | orchestrator | ceph-facts : set_fact _current_monitor_address -------------------------- 0.44s 2025-02-04 09:44:37.538015 | orchestrator | 2025-02-04 09:44:37 | INFO  | Task e16d8d1c-5ed7-438a-8f5d-78419227f281 is in state STARTED 2025-02-04 09:44:37.538509 | orchestrator | 2025-02-04 09:44:37 | INFO  | Task b1466803-00f6-4928-8205-4c239efc3f91 is in state STARTED 2025-02-04 09:44:37.540734 | orchestrator | 2025-02-04 09:44:37 | INFO  | Task 8e3f308d-a7ec-463b-8835-df7ef51fd4a3 is in state STARTED 2025-02-04 09:44:37.542414 | orchestrator | 2025-02-04 09:44:37 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:44:37.544004 | orchestrator | 2025-02-04 09:44:37 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:44:40.595708 | orchestrator | 2025-02-04 09:44:37 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:44:40.595848 | orchestrator | 2025-02-04 09:44:40 | INFO  | Task e16d8d1c-5ed7-438a-8f5d-78419227f281 is in state SUCCESS 2025-02-04 09:44:40.598812 | orchestrator | 2025-02-04 09:44:40 | INFO  | Task db1743f3-9405-45a9-9bef-c38a6d9b7dfc is in state STARTED 2025-02-04 09:44:40.601774 | orchestrator | 2025-02-04 09:44:40 | INFO  | Task b1466803-00f6-4928-8205-4c239efc3f91 is in state STARTED 2025-02-04 09:44:40.603341 | orchestrator | 2025-02-04 09:44:40 | INFO  | Task 8e3f308d-a7ec-463b-8835-df7ef51fd4a3 is in state STARTED 2025-02-04 09:44:40.605075 | orchestrator | 2025-02-04 09:44:40 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:44:40.607018 | orchestrator | 2025-02-04 09:44:40 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:44:43.660382 | orchestrator | 2025-02-04 09:44:40 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:44:43.768134 | orchestrator | 2025-02-04 09:44:43 | INFO  | Task db1743f3-9405-45a9-9bef-c38a6d9b7dfc is in state STARTED 2025-02-04 09:44:46.704075 | orchestrator | 2025-02-04 09:44:43 | INFO  | Task b1466803-00f6-4928-8205-4c239efc3f91 is in state STARTED 2025-02-04 09:44:46.704182 | orchestrator | 2025-02-04 09:44:43 | INFO  | Task 8e3f308d-a7ec-463b-8835-df7ef51fd4a3 is in state 
STARTED 2025-02-04 09:44:46.704196 | orchestrator | 2025-02-04 09:44:43 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:44:46.704206 | orchestrator | 2025-02-04 09:44:43 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:44:46.704216 | orchestrator | 2025-02-04 09:44:43 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:44:46.704240 | orchestrator | 2025-02-04 09:44:46 | INFO  | Task db1743f3-9405-45a9-9bef-c38a6d9b7dfc is in state STARTED 2025-02-04 09:44:46.705940 | orchestrator | 2025-02-04 09:44:46 | INFO  | Task b1466803-00f6-4928-8205-4c239efc3f91 is in state STARTED 2025-02-04 09:44:46.706430 | orchestrator | 2025-02-04 09:44:46 | INFO  | Task 8e3f308d-a7ec-463b-8835-df7ef51fd4a3 is in state STARTED 2025-02-04 09:44:46.706467 | orchestrator | 2025-02-04 09:44:46 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:44:46.706476 | orchestrator | 2025-02-04 09:44:46 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:44:46.706491 | orchestrator | 2025-02-04 09:44:46 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:44:49.748432 | orchestrator | 2025-02-04 09:44:49 | INFO  | Task db1743f3-9405-45a9-9bef-c38a6d9b7dfc is in state STARTED 2025-02-04 09:44:49.748741 | orchestrator | 2025-02-04 09:44:49 | INFO  | Task b1466803-00f6-4928-8205-4c239efc3f91 is in state STARTED 2025-02-04 09:44:49.750381 | orchestrator | 2025-02-04 09:44:49 | INFO  | Task 8e3f308d-a7ec-463b-8835-df7ef51fd4a3 is in state STARTED 2025-02-04 09:44:49.751708 | orchestrator | 2025-02-04 09:44:49 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:44:49.753235 | orchestrator | 2025-02-04 09:44:49 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:44:52.810200 | orchestrator | 2025-02-04 09:44:49 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:44:52.810371 | orchestrator | 2025-02-04 09:44:52 | INFO  | Task db1743f3-9405-45a9-9bef-c38a6d9b7dfc is in state STARTED 2025-02-04 09:44:52.813292 | orchestrator | 2025-02-04 09:44:52 | INFO  | Task b1466803-00f6-4928-8205-4c239efc3f91 is in state STARTED 2025-02-04 09:44:52.814209 | orchestrator | 2025-02-04 09:44:52 | INFO  | Task 8e3f308d-a7ec-463b-8835-df7ef51fd4a3 is in state STARTED 2025-02-04 09:44:52.818212 | orchestrator | 2025-02-04 09:44:52 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:44:52.820751 | orchestrator | 2025-02-04 09:44:52 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:44:55.862248 | orchestrator | 2025-02-04 09:44:52 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:44:55.862399 | orchestrator | 2025-02-04 09:44:55 | INFO  | Task db1743f3-9405-45a9-9bef-c38a6d9b7dfc is in state STARTED 2025-02-04 09:44:55.862724 | orchestrator | 2025-02-04 09:44:55 | INFO  | Task b1466803-00f6-4928-8205-4c239efc3f91 is in state STARTED 2025-02-04 09:44:55.864337 | orchestrator | 2025-02-04 09:44:55 | INFO  | Task 8e3f308d-a7ec-463b-8835-df7ef51fd4a3 is in state STARTED 2025-02-04 09:44:55.865953 | orchestrator | 2025-02-04 09:44:55 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:44:55.867238 | orchestrator | 2025-02-04 09:44:55 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:44:58.896215 | orchestrator | 2025-02-04 09:44:55 | INFO  | Wait 1 second(s) until the next check
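[Editor's note] The repeating "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" records above come from the OSISM manager polling asynchronous (Celery-style) tasks until each reaches a terminal state. The sketch below is a minimal illustration of that polling pattern only; get_task_state is a hypothetical callable standing in for the real OSISM client API, which is not visible in this log.

import time

TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_task_state, interval=1):
    """Poll each task until all reach a terminal state.

    get_task_state is assumed to map a task id to a state string such
    as STARTED or SUCCESS (hypothetical helper, not from this log).
    """
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):  # sorted() copies, safe to discard
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)

As in the log, tasks finish independently (e16d8d1c reaches SUCCESS first while the others stay STARTED), so the loop keeps cycling until the pending set is empty.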
2025-02-04 09:44:58.896325 | orchestrator | 2025-02-04 09:44:58 | INFO  | Task db1743f3-9405-45a9-9bef-c38a6d9b7dfc is in state STARTED 2025-02-04 09:44:58.897576 | orchestrator | 2025-02-04 09:44:58 | INFO  | Task b1466803-00f6-4928-8205-4c239efc3f91 is in state STARTED 2025-02-04 09:44:58.897604 | orchestrator | 2025-02-04 09:44:58 | INFO  | Task 8e3f308d-a7ec-463b-8835-df7ef51fd4a3 is in state STARTED 2025-02-04 09:44:58.898524 | orchestrator | 2025-02-04 09:44:58 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:44:58.899683 | orchestrator | 2025-02-04 09:44:58 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:45:01.945346 | orchestrator | 2025-02-04 09:44:58 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:45:01.945492 | orchestrator | 2025-02-04 09:45:01 | INFO  | Task db1743f3-9405-45a9-9bef-c38a6d9b7dfc is in state STARTED 2025-02-04 09:45:01.946383 | orchestrator | 2025-02-04 09:45:01 | INFO  | Task b1466803-00f6-4928-8205-4c239efc3f91 is in state STARTED 2025-02-04 09:45:01.946445 | orchestrator | 2025-02-04 09:45:01 | INFO  | Task 8e3f308d-a7ec-463b-8835-df7ef51fd4a3 is in state STARTED 2025-02-04 09:45:01.947071 | orchestrator | 2025-02-04 09:45:01 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:45:01.948023 | orchestrator | 2025-02-04 09:45:01 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:45:04.982123 | orchestrator | 2025-02-04 09:45:01 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:45:04.982271 | orchestrator | 2025-02-04 09:45:04 | INFO  | Task db1743f3-9405-45a9-9bef-c38a6d9b7dfc is in state STARTED 2025-02-04 09:45:04.983205 | orchestrator | 2025-02-04 09:45:04 | INFO  | Task b1466803-00f6-4928-8205-4c239efc3f91 is in state STARTED 2025-02-04 09:45:04.984682 | orchestrator | 2025-02-04 09:45:04 | INFO  | Task 8e3f308d-a7ec-463b-8835-df7ef51fd4a3 is in state STARTED 2025-02-04 09:45:04.985741 | orchestrator | 2025-02-04 09:45:04 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:45:04.987931 | orchestrator | 2025-02-04 09:45:04 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:45:08.029308 | orchestrator | 2025-02-04 09:45:04 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:45:08.029435 | orchestrator | 2025-02-04 09:45:08 | INFO  | Task db1743f3-9405-45a9-9bef-c38a6d9b7dfc is in state STARTED 2025-02-04 09:45:08.030240 | orchestrator | 2025-02-04 09:45:08 | INFO  | Task b1466803-00f6-4928-8205-4c239efc3f91 is in state STARTED 2025-02-04 09:45:08.030283 | orchestrator | 2025-02-04 09:45:08 | INFO  | Task 8e3f308d-a7ec-463b-8835-df7ef51fd4a3 is in state STARTED 2025-02-04 09:45:08.031592 | orchestrator | 2025-02-04 09:45:08 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:45:08.032216 | orchestrator | 2025-02-04 09:45:08 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:45:11.063956 | orchestrator | 2025-02-04 09:45:08 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:45:11.064132 | orchestrator | 2025-02-04 09:45:11 | INFO  | Task db1743f3-9405-45a9-9bef-c38a6d9b7dfc is in state STARTED 2025-02-04 09:45:11.065460 | orchestrator | 2025-02-04 09:45:11 | INFO  | Task b1466803-00f6-4928-8205-4c239efc3f91 is in state STARTED 2025-02-04 09:45:11.065544 | orchestrator | 2025-02-04 09:45:11 | INFO  | Task
8e3f308d-a7ec-463b-8835-df7ef51fd4a3 is in state STARTED 2025-02-04 09:45:11.066472 | orchestrator | 2025-02-04 09:45:11 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:45:11.067591 | orchestrator | 2025-02-04 09:45:11 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:45:14.096318 | orchestrator | 2025-02-04 09:45:11 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:45:14.096467 | orchestrator | 2025-02-04 09:45:14 | INFO  | Task db1743f3-9405-45a9-9bef-c38a6d9b7dfc is in state STARTED 2025-02-04 09:45:14.097501 | orchestrator | 2025-02-04 09:45:14 | INFO  | Task b1466803-00f6-4928-8205-4c239efc3f91 is in state STARTED 2025-02-04 09:45:14.100741 | orchestrator | 2025-02-04 09:45:14 | INFO  | Task 8e3f308d-a7ec-463b-8835-df7ef51fd4a3 is in state STARTED 2025-02-04 09:45:14.104261 | orchestrator | 2025-02-04 09:45:14 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:45:14.105858 | orchestrator | 2025-02-04 09:45:14 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:45:17.134250 | orchestrator | 2025-02-04 09:45:14 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:45:17.134518 | orchestrator | 2025-02-04 09:45:17 | INFO  | Task db1743f3-9405-45a9-9bef-c38a6d9b7dfc is in state STARTED 2025-02-04 09:45:17.135156 | orchestrator | 2025-02-04 09:45:17 | INFO  | Task b1466803-00f6-4928-8205-4c239efc3f91 is in state STARTED 2025-02-04 09:45:17.135194 | orchestrator | 2025-02-04 09:45:17 | INFO  | Task 8e3f308d-a7ec-463b-8835-df7ef51fd4a3 is in state STARTED 2025-02-04 09:45:17.135220 | orchestrator | 2025-02-04 09:45:17 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:45:17.136173 | orchestrator | 2025-02-04 09:45:17 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:45:20.168224 | orchestrator | 2025-02-04 09:45:17 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:45:20.168371 | orchestrator | 2025-02-04 09:45:20 | INFO  | Task db1743f3-9405-45a9-9bef-c38a6d9b7dfc is in state STARTED 2025-02-04 09:45:20.169463 | orchestrator | 2025-02-04 09:45:20 | INFO  | Task b1466803-00f6-4928-8205-4c239efc3f91 is in state STARTED 2025-02-04 09:45:20.170874 | orchestrator | 2025-02-04 09:45:20 | INFO  | Task 8e3f308d-a7ec-463b-8835-df7ef51fd4a3 is in state STARTED 2025-02-04 09:45:20.172350 | orchestrator | 2025-02-04 09:45:20 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:45:20.173353 | orchestrator | 2025-02-04 09:45:20 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:45:23.207277 | orchestrator | 2025-02-04 09:45:20 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:45:23.207395 | orchestrator | 2025-02-04 09:45:23 | INFO  | Task db1743f3-9405-45a9-9bef-c38a6d9b7dfc is in state STARTED 2025-02-04 09:45:23.210483 | orchestrator | 2025-02-04 09:45:23 | INFO  | Task b1466803-00f6-4928-8205-4c239efc3f91 is in state STARTED 2025-02-04 09:45:23.210551 | orchestrator | 2025-02-04 09:45:23 | INFO  | Task 8e3f308d-a7ec-463b-8835-df7ef51fd4a3 is in state STARTED 2025-02-04 09:45:23.212484 | orchestrator | 2025-02-04 09:45:23 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:45:23.213659 | orchestrator | 2025-02-04 09:45:23 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:45:26.265162 | orchestrator | 2025-02-04 
09:45:23 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:45:26.265307 | orchestrator | 2025-02-04 09:45:26 | INFO  | Task db1743f3-9405-45a9-9bef-c38a6d9b7dfc is in state STARTED 2025-02-04 09:45:26.265900 | orchestrator | 2025-02-04 09:45:26 | INFO  | Task b1466803-00f6-4928-8205-4c239efc3f91 is in state STARTED 2025-02-04 09:45:26.267098 | orchestrator | 2025-02-04 09:45:26 | INFO  | Task 8e3f308d-a7ec-463b-8835-df7ef51fd4a3 is in state STARTED 2025-02-04 09:45:26.268296 | orchestrator | 2025-02-04 09:45:26 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:45:26.269482 | orchestrator | 2025-02-04 09:45:26 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:45:29.314420 | orchestrator | 2025-02-04 09:45:26 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:45:29.314561 | orchestrator | 2025-02-04 09:45:29 | INFO  | Task db1743f3-9405-45a9-9bef-c38a6d9b7dfc is in state STARTED 2025-02-04 09:45:29.314864 | orchestrator | 2025-02-04 09:45:29 | INFO  | Task b1466803-00f6-4928-8205-4c239efc3f91 is in state STARTED 2025-02-04 09:45:29.314946 | orchestrator | 2025-02-04 09:45:29 | INFO  | Task 8e3f308d-a7ec-463b-8835-df7ef51fd4a3 is in state STARTED 2025-02-04 09:45:29.317164 | orchestrator | 2025-02-04 09:45:29 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:45:29.317841 | orchestrator | 2025-02-04 09:45:29 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED 2025-02-04 09:45:32.360078 | orchestrator | 2025-02-04 09:45:29 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:45:32.360368 | orchestrator | 2025-02-04 09:45:32 | INFO  | Task db1743f3-9405-45a9-9bef-c38a6d9b7dfc is in state STARTED 2025-02-04 09:45:32.360425 | orchestrator | 2025-02-04 09:45:32 | INFO  | Task b1466803-00f6-4928-8205-4c239efc3f91 is in state SUCCESS 2025-02-04 09:45:32.362099 | orchestrator | 2025-02-04 09:45:32.362168 | orchestrator | 2025-02-04 09:45:32.362196 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-02-04 09:45:32.362253 | orchestrator | 2025-02-04 09:45:32.362281 | orchestrator | TASK [Check ceph keys] ********************************************************* 2025-02-04 09:45:32.362307 | orchestrator | Tuesday 04 February 2025 09:43:50 +0000 (0:00:00.206) 0:00:00.207 ****** 2025-02-04 09:45:32.362331 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-02-04 09:45:32.362357 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-02-04 09:45:32.362383 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-02-04 09:45:32.362669 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-02-04 09:45:32.362738 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-02-04 09:45:32.362765 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-02-04 09:45:32.362788 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-02-04 09:45:32.362828 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-02-04 09:45:32.362852 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-02-04 09:45:32.362917 | orchestrator | 2025-02-04 09:45:32.362941 | orchestrator | TASK 
[Set _fetch_ceph_keys fact] *********************************************** 2025-02-04 09:45:32.362964 | orchestrator | Tuesday 04 February 2025 09:43:53 +0000 (0:00:03.612) 0:00:03.819 ****** 2025-02-04 09:45:32.363105 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-02-04 09:45:32.363133 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-02-04 09:45:32.363157 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-02-04 09:45:32.363181 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-02-04 09:45:32.363203 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-02-04 09:45:32.363227 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-02-04 09:45:32.363251 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-02-04 09:45:32.363274 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-02-04 09:45:32.363298 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-02-04 09:45:32.363322 | orchestrator | 2025-02-04 09:45:32.363344 | orchestrator | TASK [Point out that the following task takes some time and does not give any output] *** 2025-02-04 09:45:32.363369 | orchestrator | Tuesday 04 February 2025 09:43:54 +0000 (0:00:00.286) 0:00:04.106 ****** 2025-02-04 09:45:32.363393 | orchestrator | ok: [testbed-manager] => { 2025-02-04 09:45:32.363419 | orchestrator |  "msg": "The task 'Fetch ceph keys from the first monitor node' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete." 
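[Editor's note] The msg record above explains why the log goes quiet here: 'Fetch ceph keys from the first monitor node' launches a nested ansible-playbook run on the manager, whose output is captured rather than streamed (hence the ~40 s gap with no records). A minimal sketch of that wrapper pattern follows; the playbook path, limit, and function name are illustrative assumptions, as the actual invocation is not shown in this log.

import subprocess

def fetch_ceph_keys(playbook="/path/to/ceph-fetch-keys.yml",
                    limit="testbed-node-0"):
    """Run a nested ansible-playbook and report only its completion.

    Output is captured, not streamed, which is why the outer log shows
    no intermediate progress for this task (hypothetical sketch; the
    real playbook invoked here is not visible in the log).
    """
    result = subprocess.run(
        ["ansible-playbook", playbook, "--limit", limit],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        raise RuntimeError(f"nested playbook failed:\n{result.stderr}")
    return result.returncode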
2025-02-04 09:45:32.363444 | orchestrator | } 2025-02-04 09:45:32.363468 | orchestrator | 2025-02-04 09:45:32.363491 | orchestrator | TASK [Fetch ceph keys from the first monitor node] ***************************** 2025-02-04 09:45:32.363514 | orchestrator | Tuesday 04 February 2025 09:43:54 +0000 (0:00:00.180) 0:00:04.286 ****** 2025-02-04 09:45:32.363539 | orchestrator | changed: [testbed-manager] 2025-02-04 09:45:32.363563 | orchestrator | 2025-02-04 09:45:32.363587 | orchestrator | TASK [Copy ceph infrastructure keys to the configuration repository] *********** 2025-02-04 09:45:32.363609 | orchestrator | Tuesday 04 February 2025 09:44:33 +0000 (0:00:39.597) 0:00:43.884 ****** 2025-02-04 09:45:32.363635 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.admin.keyring', 'dest': '/opt/configuration/environments/infrastructure/files/ceph/ceph.client.admin.keyring'}) 2025-02-04 09:45:32.363659 | orchestrator | 2025-02-04 09:45:32.363681 | orchestrator | TASK [Copy ceph kolla keys to the configuration repository] ******************** 2025-02-04 09:45:32.363704 | orchestrator | Tuesday 04 February 2025 09:44:34 +0000 (0:00:00.651) 0:00:44.535 ****** 2025-02-04 09:45:32.363729 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume/ceph.client.cinder.keyring'}) 2025-02-04 09:45:32.363756 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder.keyring'}) 2025-02-04 09:45:32.363780 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder-backup.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder-backup.keyring'}) 2025-02-04 09:45:32.363823 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.cinder.keyring'}) 2025-02-04 09:45:32.363848 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.nova.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.nova.keyring'}) 2025-02-04 09:45:32.363917 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.glance.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/glance/ceph.client.glance.keyring'}) 2025-02-04 09:45:32.363951 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.gnocchi.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/gnocchi/ceph.client.gnocchi.keyring'}) 2025-02-04 09:45:32.363973 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.manila.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/manila/ceph.client.manila.keyring'}) 2025-02-04 09:45:32.363987 | orchestrator | 2025-02-04 09:45:32.364000 | orchestrator | TASK [Copy ceph custom keys to the configuration repository] ******************* 2025-02-04 09:45:32.364012 | orchestrator | Tuesday 04 February 2025 09:44:37 +0000 (0:00:03.108) 0:00:47.644 ****** 2025-02-04 09:45:32.364025 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:45:32.364038 | orchestrator | 2025-02-04 09:45:32.364058 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:45:32.364080 | orchestrator | 
testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-04 09:45:32.364101 | orchestrator | 2025-02-04 09:45:32.364122 | orchestrator | 2025-02-04 09:45:32.364143 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:45:32.364165 | orchestrator | Tuesday 04 February 2025 09:44:37 +0000 (0:00:00.028) 0:00:47.672 ****** 2025-02-04 09:45:32.364185 | orchestrator | =============================================================================== 2025-02-04 09:45:32.364206 | orchestrator | Fetch ceph keys from the first monitor node ---------------------------- 39.60s 2025-02-04 09:45:32.364224 | orchestrator | Check ceph keys --------------------------------------------------------- 3.61s 2025-02-04 09:45:32.364243 | orchestrator | Copy ceph kolla keys to the configuration repository -------------------- 3.11s 2025-02-04 09:45:32.364263 | orchestrator | Copy ceph infrastructure keys to the configuration repository ----------- 0.65s 2025-02-04 09:45:32.364406 | orchestrator | Set _fetch_ceph_keys fact ----------------------------------------------- 0.29s 2025-02-04 09:45:32.364430 | orchestrator | Point out that the following task takes some time and does not give any output --- 0.18s 2025-02-04 09:45:32.364443 | orchestrator | Copy ceph custom keys to the configuration repository ------------------- 0.03s 2025-02-04 09:45:32.364456 | orchestrator | 2025-02-04 09:45:32.364468 | orchestrator | 2025-02-04 09:45:32.364481 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-04 09:45:32.364493 | orchestrator | 2025-02-04 09:45:32.364506 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-04 09:45:32.364518 | orchestrator | Tuesday 04 February 2025 09:44:09 +0000 (0:00:00.438) 0:00:00.438 ****** 2025-02-04 09:45:32.364531 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:45:32.364544 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:45:32.364556 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:45:32.364569 | orchestrator | 2025-02-04 09:45:32.364582 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-04 09:45:32.364595 | orchestrator | Tuesday 04 February 2025 09:44:09 +0000 (0:00:00.487) 0:00:00.926 ****** 2025-02-04 09:45:32.364607 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-02-04 09:45:32.364620 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-02-04 09:45:32.364633 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-02-04 09:45:32.364666 | orchestrator | 2025-02-04 09:45:32.364679 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-02-04 09:45:32.364692 | orchestrator | 2025-02-04 09:45:32.364705 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-02-04 09:45:32.364717 | orchestrator | Tuesday 04 February 2025 09:44:10 +0000 (0:00:00.576) 0:00:01.502 ****** 2025-02-04 09:45:32.364730 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:45:32.364744 | orchestrator | 2025-02-04 09:45:32.364756 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-02-04 09:45:32.364769 | orchestrator | Tuesday 04 February 2025 09:44:11 +0000 
(0:00:00.754) 0:00:02.257 ****** 2025-02-04 09:45:32.364781 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating services (5 retries left). 2025-02-04 09:45:32.364794 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating services (4 retries left). 2025-02-04 09:45:32.364807 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating services (3 retries left). 2025-02-04 09:45:32.364820 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating services (2 retries left). 2025-02-04 09:45:32.364832 | orchestrator | FAILED - RETRYING: [testbed-node-0]: placement | Creating services (1 retries left). 2025-02-04 09:45:32.364939 | orchestrator | failed: [testbed-node-0] (item=placement (placement)) => {"action": "os_keystone_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Placement Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:8780"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:8780"}], "name": "placement", "type": "placement"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connection.py\", line 174, in _new_conn\n conn = connection.create_connection(\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/util/connection.py\", line 95, in create_connection\n raise err\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/util/connection.py\", line 85, in create_connection\n sock.connect(sa)\nOSError: [Errno 113] No route to host\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connectionpool.py\", line 715, in urlopen\n httplib_response = self._make_request(\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connectionpool.py\", line 404, in _make_request\n self._validate_conn(conn)\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connectionpool.py\", line 1058, in _validate_conn\n conn.connect()\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connection.py\", line 363, in connect\n self.sock = conn = self._new_conn()\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connection.py\", line 186, in _new_conn\n raise NewConnectionError(\nurllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 113] No route to host\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/requests/adapters.py\", line 486, in send\n resp = conn.urlopen(\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connectionpool.py\", line 799, in urlopen\n retries = retries.increment(\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/util/retry.py\", line 592, in increment\n raise MaxRetryError(_pool, url, error or ResponseError(cause))\nurllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='api-int.testbed.osism.xyz', port=5000): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 113] No route to host'))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File 
\"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1021, in _send_request\n resp = self.session.request(method, url, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/requests/sessions.py\", line 589, in request\n resp = self.send(prep, **send_kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/requests/sessions.py\", line 703, in send\n r = adapter.send(request, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/requests/adapters.py\", line 519, in send\n raise ConnectionError(e, request=request)\nrequests.exceptions.ConnectionError: HTTPSConnectionPool(host='api-int.testbed.osism.xyz', port=5000): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 113] No route to host'))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 133, in _do_create_plugin\n disc = self.get_discovery(session,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 605, in get_discovery\n return discover.get_discovery(session=session, url=url,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 1459, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 539, in __init__\n self._data = get_version_data(session, url,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 106, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1154, in get\n return self.request(url, 'GET', **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 930, in request\n resp = send(**kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1037, in _send_request\n raise exceptions.ConnectFailure(msg)\nkeystoneauth1.exceptions.connection.ConnectFailure: Unable to establish connection to https://api-int.testbed.osism.xyz:5000: HTTPSConnectionPool(host='api-int.testbed.osism.xyz', port=5000): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 113] No route to host'))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1738662326.0878518-4750-118993653524553/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1738662326.0878518-4750-118993653524553/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1738662326.0878518-4750-118993653524553/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"/usr/lib/python3.10/runpy.py\", line 224, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib/python3.10/runpy.py\", line 96, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/lib/python3.10/runpy.py\", line 86, in 
_run_code\n exec(code, run_globals)\n File \"/tmp/ansible_os_keystone_service_payload_yj2k7vt2/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_os_keystone_service_payload_yj2k7vt2/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_os_keystone_service_payload_yj2k7vt2/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_os_keystone_service_payload_yj2k7vt2/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_os_keystone_service_payload_yj2k7vt2/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.10/site-packages/openstack/service_description.py\", line 89, in __get__\n proxy = self._make_proxy(instance)\n File \"/opt/ansible/lib/python3.10/site-packages/openstack/service_description.py\", line 287, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 268, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 131, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 203, in get_auth_ref\n self._plugin = self._do_create_plugin(session)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 155, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. 
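[Editor's note] Stripped of the keystoneauth1/urllib3 wrapping, the failure above (and its continuation below) is a plain network error: OSError 113, "No route to host", raised before any TLS or authentication happens, so keystone's internal endpoint is simply unreachable from testbed-node-0 at this point. A quick way to reproduce the same symptom outside Ansible is a bare TCP connect against the host and port taken from the error message; this is a generic diagnostic sketch, not part of the deployment tooling.

import socket

def check_endpoint(host="api-int.testbed.osism.xyz", port=5000, timeout=5):
    """Attempt a plain TCP connect to the identity endpoint.

    errno 113 (EHOSTUNREACH) here matches the 'No route to host' in the
    traceback above; any TLS or auth problem would surface only after
    this connect succeeds.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "reachable"
    except OSError as exc:
        return f"unreachable: {exc}"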
Unable to establish connection to https://api-int.testbed.osism.xyz:5000: HTTPSConnectionPool(host='api-int.testbed.osism.xyz', port=5000): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 113] No route to host'))\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} 2025-02-04 09:45:32.364975 | orchestrator | 2025-02-04 09:45:32.364988 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:45:32.365001 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-02-04 09:45:32.365015 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-04 09:45:32.365031 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-04 09:45:32.365045 | orchestrator | 2025-02-04 09:45:32.365059 | orchestrator | 2025-02-04 09:45:32.365073 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:45:32.365087 | orchestrator | Tuesday 04 February 2025 09:45:30 +0000 (0:01:19.256) 0:01:21.514 ****** 2025-02-04 09:45:32.365107 | orchestrator | =============================================================================== 2025-02-04 09:45:32.365121 | orchestrator | service-ks-register : placement | Creating services -------------------- 79.26s 2025-02-04 09:45:32.365136 | orchestrator | placement : include_tasks ----------------------------------------------- 0.75s 2025-02-04 09:45:32.365150 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.58s 2025-02-04 09:45:32.365165 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.49s 2025-02-04 09:45:32.365179 | orchestrator | 2025-02-04 09:45:32 | INFO  | Task 8e3f308d-a7ec-463b-8835-df7ef51fd4a3 is in state SUCCESS 2025-02-04 09:45:32.365193 | orchestrator | 2025-02-04 09:45:32.365207 | orchestrator | 2025-02-04 09:45:32.365220 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-04 09:45:32.365235 | orchestrator | 2025-02-04 09:45:32.365249 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-04 09:45:32.365264 | orchestrator | Tuesday 04 February 2025 09:44:09 +0000 (0:00:00.406) 0:00:00.406 ****** 2025-02-04 09:45:32.365279 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:45:32.365293 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:45:32.365307 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:45:32.365321 | orchestrator | 2025-02-04 09:45:32.365336 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-04 09:45:32.365351 | orchestrator | Tuesday 04 February 2025 09:44:09 +0000 (0:00:00.455) 0:00:00.862 ****** 2025-02-04 09:45:32.365366 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-02-04 09:45:32.365380 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-02-04 09:45:32.365393 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-02-04 09:45:32.365406 | orchestrator | 2025-02-04 09:45:32.365418 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-02-04 09:45:32.365431 | orchestrator | 2025-02-04 09:45:32.365520 | orchestrator | TASK [magnum : 
include_tasks] ************************************************** 2025-02-04 09:45:32.365533 | orchestrator | Tuesday 04 February 2025 09:44:10 +0000 (0:00:00.377) 0:00:01.239 ****** 2025-02-04 09:45:32.365615 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:45:32.365628 | orchestrator | 2025-02-04 09:45:32.365641 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-02-04 09:45:32.365654 | orchestrator | Tuesday 04 February 2025 09:44:11 +0000 (0:00:00.992) 0:00:02.232 ****** 2025-02-04 09:45:32.365666 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating services (5 retries left). 2025-02-04 09:45:32.365686 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating services (4 retries left). 2025-02-04 09:45:32.365698 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating services (3 retries left). 2025-02-04 09:45:32.365711 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating services (2 retries left). 2025-02-04 09:45:32.365723 | orchestrator | FAILED - RETRYING: [testbed-node-0]: magnum | Creating services (1 retries left). 2025-02-04 09:45:32.365767 | orchestrator | failed: [testbed-node-0] (item=magnum (container-infra)) => {"action": "os_keystone_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Container Infrastructure Management Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9511/v1"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9511/v1"}], "name": "magnum", "type": "container-infra"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. 
Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connection.py\", line 174, in _new_conn\n conn = connection.create_connection(\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/util/connection.py\", line 95, in create_connection\n raise err\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/util/connection.py\", line 85, in create_connection\n sock.connect(sa)\nOSError: [Errno 113] No route to host\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connectionpool.py\", line 715, in urlopen\n httplib_response = self._make_request(\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connectionpool.py\", line 404, in _make_request\n self._validate_conn(conn)\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connectionpool.py\", line 1058, in _validate_conn\n conn.connect()\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connection.py\", line 363, in connect\n self.sock = conn = self._new_conn()\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connection.py\", line 186, in _new_conn\n raise NewConnectionError(\nurllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 113] No route to host\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/requests/adapters.py\", line 486, in send\n resp = conn.urlopen(\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connectionpool.py\", line 799, in urlopen\n retries = retries.increment(\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/util/retry.py\", line 592, in increment\n raise MaxRetryError(_pool, url, error or ResponseError(cause))\nurllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='api-int.testbed.osism.xyz', port=5000): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 113] No route to host'))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1021, in _send_request\n resp = self.session.request(method, url, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/requests/sessions.py\", line 589, in request\n resp = self.send(prep, **send_kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/requests/sessions.py\", line 703, in send\n r = adapter.send(request, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/requests/adapters.py\", line 519, in send\n raise ConnectionError(e, request=request)\nrequests.exceptions.ConnectionError: HTTPSConnectionPool(host='api-int.testbed.osism.xyz', port=5000): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 113] No route to host'))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 133, in _do_create_plugin\n disc = self.get_discovery(session,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 605, in get_discovery\n return discover.get_discovery(session=session, url=url,\n File 
\"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 1459, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 539, in __init__\n self._data = get_version_data(session, url,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 106, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1154, in get\n return self.request(url, 'GET', **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 930, in request\n resp = send(**kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1037, in _send_request\n raise exceptions.ConnectFailure(msg)\nkeystoneauth1.exceptions.connection.ConnectFailure: Unable to establish connection to https://api-int.testbed.osism.xyz:5000: HTTPSConnectionPool(host='api-int.testbed.osism.xyz', port=5000): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 113] No route to host'))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1738662326.086837-4751-69159329295958/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1738662326.086837-4751-69159329295958/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1738662326.086837-4751-69159329295958/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"/usr/lib/python3.10/runpy.py\", line 224, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib/python3.10/runpy.py\", line 96, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_os_keystone_service_payload_0b3430ke/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_os_keystone_service_payload_0b3430ke/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_os_keystone_service_payload_0b3430ke/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_os_keystone_service_payload_0b3430ke/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_os_keystone_service_payload_0b3430ke/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.10/site-packages/openstack/service_description.py\", line 89, in __get__\n proxy = self._make_proxy(instance)\n File \"/opt/ansible/lib/python3.10/site-packages/openstack/service_description.py\", line 287, in _make_proxy\n 
found_version = temp_adapter.get_api_major_version()\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 268, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 131, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 203, in get_auth_ref\n self._plugin = self._do_create_plugin(session)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 155, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. Unable to establish connection to https://api-int.testbed.osism.xyz:5000: HTTPSConnectionPool(host='api-int.testbed.osism.xyz', port=5000): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 113] No route to host'))\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
2025-02-04 09:45:35.394975 | orchestrator |
2025-02-04 09:45:35.395100 | orchestrator | PLAY RECAP *********************************************************************
2025-02-04 09:45:35.395122 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-02-04 09:45:35.395141 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-02-04 09:45:35.395158 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-02-04 09:45:35.395172 | orchestrator |
2025-02-04 09:45:35.395187 | orchestrator |
2025-02-04 09:45:35.395203 | orchestrator | TASKS RECAP ********************************************************************
2025-02-04 09:45:35.395217 | orchestrator | Tuesday 04 February 2025 09:45:30 +0000 (0:01:19.243) 0:01:21.476 ******
2025-02-04 09:45:35.395232 | orchestrator | ===============================================================================
2025-02-04 09:45:35.395381 | orchestrator | service-ks-register : magnum | Creating services ----------------------- 79.24s
2025-02-04 09:45:35.395397 | orchestrator | magnum : include_tasks -------------------------------------------------- 0.99s
2025-02-04 09:45:35.395412 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.46s
2025-02-04 09:45:35.395426 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.38s
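Every retry above fails in the same place: keystoneauth1 cannot even discover the identity API versions, because the underlying socket connect to the internal API endpoint dies with OSError [Errno 113] No route to host. That is a layer-3 reachability problem with the api-int VIP, not an authentication problem. A minimal probe of that first, unauthenticated discovery request; only the URL is taken from the error above, everything else is illustrative:

import requests

KEYSTONE_URL = "https://api-int.testbed.osism.xyz:5000"

try:
    # keystoneauth1 begins with an unauthenticated GET on the root URL to
    # discover the available identity API versions; this mimics that step.
    resp = requests.get(KEYSTONE_URL, timeout=10)
    # Keystone normally answers the root URL with its version document.
    print(resp.status_code, resp.json())
except requests.exceptions.ConnectionError as exc:
    # "[Errno 113] No route to host" surfaces here: TCP fails before TLS or
    # HTTP are even attempted, so fixing credentials or auth_url cannot help.
    print(f"no connectivity to {KEYSTONE_URL}: {exc}")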
2025-02-04 09:45:35.395441 | orchestrator | 2025-02-04 09:45:32 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED
2025-02-04 09:45:35.395455 | orchestrator | 2025-02-04 09:45:32 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED
2025-02-04 09:45:35.395470 | orchestrator | 2025-02-04 09:45:32 | INFO  | Task 58ca35cf-61be-458d-b443-1faeda41be1b is in state STARTED
2025-02-04 09:45:35.395484 | orchestrator | 2025-02-04 09:45:32 | INFO  | Wait 1 second(s) until the next check
2025-02-04 09:45:35.395515 | orchestrator | 2025-02-04 09:45:35 | INFO  | Task db1743f3-9405-45a9-9bef-c38a6d9b7dfc is in state SUCCESS
2025-02-04 09:45:35.398111 | orchestrator | 2025-02-04 09:45:35 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED
2025-02-04 09:45:35.398148 | orchestrator | 2025-02-04 09:45:35 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED
2025-02-04 09:45:35.399701 | orchestrator | 2025-02-04 09:45:35 | INFO  | Task 58ca35cf-61be-458d-b443-1faeda41be1b is in state STARTED
2025-02-04 09:45:35.402288 | orchestrator | 2025-02-04 09:45:35 | INFO  | Task 12c4f1cc-6084-468a-a0df-96cc8ab1a627 is in state STARTED
2025-02-04 09:45:35.404476 | orchestrator | 2025-02-04 09:45:35 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED
2025-02-04 09:45:35.405742 | orchestrator | 2025-02-04 09:45:35 | INFO  | Wait 1 second(s) until the next check
2025-02-04 09:45:38.438170 | orchestrator | 2025-02-04 09:45:38 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED
2025-02-04 09:45:38.439111 | orchestrator | 2025-02-04 09:45:38 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED
2025-02-04 09:45:38.439647 | orchestrator | 2025-02-04 09:45:38 | INFO  | Task 58ca35cf-61be-458d-b443-1faeda41be1b is in state STARTED
2025-02-04 09:45:38.440369 | orchestrator | 2025-02-04 09:45:38 | INFO  | Task 12c4f1cc-6084-468a-a0df-96cc8ab1a627 is in state STARTED
2025-02-04 09:45:38.441320 | orchestrator | 2025-02-04 09:45:38 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED
2025-02-04 09:45:41.479190 | orchestrator | 2025-02-04 09:45:38 | INFO  | Wait 1 second(s) until the next check
2025-02-04 09:45:41.479336 | orchestrator | 2025-02-04 09:45:41 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED
2025-02-04 09:45:41.480233 | orchestrator | 2025-02-04 09:45:41 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED
2025-02-04 09:45:41.480274 | orchestrator | 2025-02-04 09:45:41 | INFO  | Task 58ca35cf-61be-458d-b443-1faeda41be1b is in state STARTED
2025-02-04 09:45:41.481492 | orchestrator | 2025-02-04 09:45:41 | INFO  | Task 12c4f1cc-6084-468a-a0df-96cc8ab1a627 is in state STARTED
2025-02-04 09:45:41.482281 | orchestrator | 2025-02-04 09:45:41 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED
2025-02-04 09:45:41.482408 | orchestrator | 2025-02-04 09:45:41 | INFO  | Wait 1 second(s) until the next check
2025-02-04 09:45:44.509197 | orchestrator | 2025-02-04 09:45:44 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED
2025-02-04 09:45:44.509851 | orchestrator | 2025-02-04 09:45:44 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED
2025-02-04 09:45:44.510000 | orchestrator | 2025-02-04 09:45:44 | INFO  | Task 58ca35cf-61be-458d-b443-1faeda41be1b is in state STARTED
2025-02-04 09:45:44.510216 | orchestrator | 2025-02-04 09:45:44 | INFO  | Task 12c4f1cc-6084-468a-a0df-96cc8ab1a627 is in state STARTED
2025-02-04 09:45:44.510252 | orchestrator | 2025-02-04 09:45:44 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED
2025-02-04 09:45:47.543182 | orchestrator | 2025-02-04 09:45:44 | INFO  | Wait 1 second(s) until the next check
2025-02-04 09:45:47.543291 | orchestrator | 2025-02-04 09:45:47 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED
2025-02-04 09:45:47.544114 | orchestrator | 2025-02-04 09:45:47 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED
2025-02-04 09:45:47.545890 | orchestrator | 2025-02-04 09:45:47 | INFO  | Task 58ca35cf-61be-458d-b443-1faeda41be1b is in state STARTED
2025-02-04 09:45:47.547369 | orchestrator | 2025-02-04 09:45:47 | INFO  | Task 12c4f1cc-6084-468a-a0df-96cc8ab1a627 is in state STARTED
2025-02-04 09:45:47.549020 | orchestrator | 2025-02-04 09:45:47 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED
2025-02-04 09:45:47.549059 | orchestrator | 2025-02-04 09:45:47 | INFO  | Wait 1 second(s) until the next check
2025-02-04 09:45:50.589499 | orchestrator | 2025-02-04 09:45:50 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED
2025-02-04 09:45:50.589799 | orchestrator | 2025-02-04 09:45:50 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED
2025-02-04 09:45:50.590256 | orchestrator | 2025-02-04 09:45:50 | INFO  | Task 58ca35cf-61be-458d-b443-1faeda41be1b is in state STARTED
2025-02-04 09:45:50.591390 | orchestrator | 2025-02-04 09:45:50 | INFO  | Task 12c4f1cc-6084-468a-a0df-96cc8ab1a627 is in state STARTED
2025-02-04 09:45:50.591814 | orchestrator | 2025-02-04 09:45:50 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED
2025-02-04 09:45:53.634263 | orchestrator | 2025-02-04 09:45:50 | INFO  | Wait 1 second(s) until the next check
2025-02-04 09:45:53.634423 | orchestrator | 2025-02-04 09:45:53 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED
2025-02-04 09:45:53.634899 | orchestrator | 2025-02-04 09:45:53 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED
2025-02-04 09:45:53.636411 | orchestrator | 2025-02-04 09:45:53 | INFO  | Task 58ca35cf-61be-458d-b443-1faeda41be1b is in state STARTED
2025-02-04 09:45:53.637453 | orchestrator | 2025-02-04 09:45:53 | INFO  | Task 12c4f1cc-6084-468a-a0df-96cc8ab1a627 is in state STARTED
2025-02-04 09:45:53.639345 | orchestrator | 2025-02-04 09:45:53 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED
2025-02-04 09:45:56.683267 | orchestrator | 2025-02-04 09:45:53 | INFO  | Wait 1 second(s) until the next check
2025-02-04 09:45:56.683390 | orchestrator | 2025-02-04 09:45:56 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED
2025-02-04 09:45:56.683983 | orchestrator | 2025-02-04 09:45:56 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED
2025-02-04 09:45:56.684814 | orchestrator | 2025-02-04 09:45:56 | INFO  | Task 58ca35cf-61be-458d-b443-1faeda41be1b is in state STARTED
2025-02-04 09:45:56.686316 | orchestrator | 2025-02-04 09:45:56 | INFO  | Task 12c4f1cc-6084-468a-a0df-96cc8ab1a627 is in state STARTED
2025-02-04 09:45:56.687603 | orchestrator | 2025-02-04 09:45:56 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED
2025-02-04 09:45:56.687705 | orchestrator | 2025-02-04 09:45:56 | INFO  | Wait 1 second(s) until the next check
2025-02-04 09:45:59.721445 | orchestrator | 2025-02-04 09:45:59 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED
2025-02-04 09:45:59.721673 | orchestrator | 2025-02-04 09:45:59 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state STARTED
2025-02-04 09:45:59.722358 | orchestrator | 2025-02-04 09:45:59 | INFO  | Task 58ca35cf-61be-458d-b443-1faeda41be1b is in state STARTED
2025-02-04 09:45:59.723146 | orchestrator | 2025-02-04 09:45:59 | INFO  | Task 12c4f1cc-6084-468a-a0df-96cc8ab1a627 is in state STARTED
2025-02-04 09:45:59.723752 | orchestrator | 2025-02-04 09:45:59 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED
2025-02-04 09:46:02.771983 | orchestrator | 2025-02-04 09:45:59 | INFO  | Wait 1 second(s) until the next check
2025-02-04 09:46:02.772137 | orchestrator | 2025-02-04 09:46:02 | INFO  | Task 86c7895a-6d9f-46e8-be7a-115fd5eafb60 is in state STARTED
2025-02-04 09:46:02.773204 | orchestrator | 2025-02-04 09:46:02 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED
2025-02-04 09:46:02.774891 | orchestrator | 2025-02-04 09:46:02 | INFO  | Task 6347eb1d-c33e-4298-8976-cc38622a693c is in state SUCCESS
2025-02-04 09:46:02.776427 | orchestrator |
2025-02-04 09:46:02.776468 | orchestrator |
2025-02-04 09:46:02.776480 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2025-02-04 09:46:02.776492 | orchestrator |
2025-02-04 09:46:02.776502 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2025-02-04 09:46:02.776513 | orchestrator | Tuesday 04 February 2025 09:44:42 +0000 (0:00:00.276) 0:00:00.277 ******
2025-02-04 09:46:02.776524 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2025-02-04 09:46:02.776536 | orchestrator |
2025-02-04 09:46:02.776546 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2025-02-04 09:46:02.776557 | orchestrator | Tuesday 04 February 2025 09:44:42 +0000 (0:00:00.391) 0:00:00.668 ******
2025-02-04 09:46:02.776568 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2025-02-04 09:46:02.776601 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2025-02-04 09:46:02.776612 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2025-02-04 09:46:02.776623 | orchestrator |
2025-02-04 09:46:02.776633 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2025-02-04 09:46:02.776644 | orchestrator | Tuesday 04 February 2025 09:44:44 +0000 (0:00:01.383) 0:00:02.052 ******
2025-02-04 09:46:02.776655 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2025-02-04 09:46:02.776665 | orchestrator |
2025-02-04 09:46:02.776676 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2025-02-04 09:46:02.776686 | orchestrator | Tuesday 04 February 2025 09:44:45 +0000 (0:00:01.299) 0:00:03.351 ******
2025-02-04 09:46:02.776696 | orchestrator | changed: [testbed-manager]
2025-02-04 09:46:02.776707 | orchestrator |
2025-02-04 09:46:02.776718 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2025-02-04 09:46:02.776728 | orchestrator | Tuesday 04 February 2025 09:44:46 +0000 (0:00:01.089) 0:00:04.441 ******
2025-02-04 09:46:02.776738 | orchestrator | changed: [testbed-manager]
2025-02-04 09:46:02.776749 | orchestrator |
2025-02-04 09:46:02.776759 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2025-02-04 09:46:02.776772 | orchestrator | Tuesday 04 February 2025 09:44:47 +0000 (0:00:00.953) 0:00:05.394 ******
2025-02-04 09:46:02.776791 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2025-02-04 09:46:02.776809 | orchestrator | ok: [testbed-manager]
2025-02-04 09:46:02.776826 | orchestrator |
2025-02-04 09:46:02.776900 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2025-02-04 09:46:02.776920 | orchestrator | Tuesday 04 February 2025 09:45:21 +0000 (0:00:33.579) 0:00:38.973 ******
2025-02-04 09:46:02.776937 | orchestrator | changed: [testbed-manager] => (item=ceph)
2025-02-04 09:46:02.776954 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2025-02-04 09:46:02.776965 | orchestrator | changed: [testbed-manager] => (item=rados)
2025-02-04 09:46:02.776976 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2025-02-04 09:46:02.776986 | orchestrator | changed: [testbed-manager] => (item=rbd)
2025-02-04 09:46:02.776996 | orchestrator |
2025-02-04 09:46:02.777007 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2025-02-04 09:46:02.777017 | orchestrator | Tuesday 04 February 2025 09:45:25 +0000 (0:00:04.467) 0:00:43.441 ******
2025-02-04 09:46:02.777043 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2025-02-04 09:46:02.777056 | orchestrator |
2025-02-04 09:46:02.777067 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2025-02-04 09:46:02.777080 | orchestrator | Tuesday 04 February 2025 09:45:26 +0000 (0:00:00.504) 0:00:43.945 ******
2025-02-04 09:46:02.777091 | orchestrator | skipping: [testbed-manager]
2025-02-04 09:46:02.777103 | orchestrator |
2025-02-04 09:46:02.777119 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2025-02-04 09:46:02.777132 | orchestrator | Tuesday 04 February 2025 09:45:26 +0000 (0:00:00.156) 0:00:44.102 ******
2025-02-04 09:46:02.777144 | orchestrator | skipping: [testbed-manager]
2025-02-04 09:46:02.777154 | orchestrator |
2025-02-04 09:46:02.777164 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2025-02-04 09:46:02.777175 | orchestrator | Tuesday 04 February 2025 09:45:26 +0000 (0:00:00.479) 0:00:44.582 ******
2025-02-04 09:46:02.777185 | orchestrator | changed: [testbed-manager]
2025-02-04 09:46:02.777196 | orchestrator |
2025-02-04 09:46:02.777206 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2025-02-04 09:46:02.777216 | orchestrator | Tuesday 04 February 2025 09:45:29 +0000 (0:00:02.944) 0:00:47.526 ******
2025-02-04 09:46:02.777226 | orchestrator | changed: [testbed-manager]
2025-02-04 09:46:02.777242 | orchestrator |
2025-02-04 09:46:02.777252 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2025-02-04 09:46:02.777271 | orchestrator | Tuesday 04 February 2025 09:45:30 +0000 (0:00:00.937) 0:00:48.463 ******
2025-02-04 09:46:02.777281 | orchestrator | changed: [testbed-manager]
2025-02-04 09:46:02.777292 | orchestrator |
2025-02-04 09:46:02.777302 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2025-02-04 09:46:02.777312 | orchestrator | Tuesday 04 February 2025 09:45:31 +0000 (0:00:00.578) 0:00:49.041 ******
2025-02-04 09:46:02.777323 | orchestrator | ok: [testbed-manager] => (item=ceph)
2025-02-04 09:46:02.777333 | orchestrator | ok: [testbed-manager] => (item=rados)
2025-02-04 09:46:02.777344 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2025-02-04 09:46:02.777354 | orchestrator | ok: [testbed-manager] => (item=rbd)
2025-02-04 09:46:02.777365 | orchestrator |
2025-02-04 09:46:02.777375 | orchestrator | PLAY RECAP *********************************************************************
2025-02-04 09:46:02.777388 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-02-04 09:46:02.777408 | orchestrator |
2025-02-04 09:46:02.777425 | orchestrator |
2025-02-04 09:46:02.777453 | orchestrator | TASKS RECAP ********************************************************************
2025-02-04 09:46:02.777470 | orchestrator | Tuesday 04 February 2025 09:45:32 +0000 (0:00:01.313) 0:00:50.355 ******
2025-02-04 09:46:02.777487 | orchestrator | ===============================================================================
2025-02-04 09:46:02.777503 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 33.58s
2025-02-04 09:46:02.777521 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.47s
2025-02-04 09:46:02.777649 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 2.94s
2025-02-04 09:46:02.777666 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.38s
2025-02-04 09:46:02.777676 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.32s
2025-02-04 09:46:02.777687 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.30s
2025-02-04 09:46:02.777697 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.09s
2025-02-04 09:46:02.777707 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.95s
2025-02-04 09:46:02.777718 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.94s
2025-02-04 09:46:02.777728 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.58s
2025-02-04 09:46:02.777738 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.50s
2025-02-04 09:46:02.777749 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.48s
2025-02-04 09:46:02.777759 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.39s
2025-02-04 09:46:02.777769 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.16s
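The cephclient role needed one retry before its service check passed, and its handlers then restarted the stack and blocked until the container reported healthy. A rough Python equivalent of that "Ensure that all containers are up" / "Wait for an healthy service" sequence; the container name "cephclient" and the timeout are assumptions, not values taken from the role:

import subprocess
import time

def wait_healthy(container: str, timeout: float = 60.0) -> bool:
    # Poll Docker's health status until the container reports healthy.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = subprocess.run(
            ["docker", "inspect", "--format", "{{.State.Health.Status}}", container],
            capture_output=True, text=True,
        )
        if result.stdout.strip() == "healthy":
            return True
        time.sleep(2)  # retry with a short delay, like the handler does
    return False

print(wait_healthy("cephclient"))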
2025-02-04 09:46:02.777779 | orchestrator |
2025-02-04 09:46:02.777790 | orchestrator |
2025-02-04 09:46:02.777800 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-02-04 09:46:02.777810 | orchestrator |
2025-02-04 09:46:02.777821 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-02-04 09:46:02.777831 | orchestrator | Tuesday 04 February 2025 09:42:44 +0000 (0:00:00.118) 0:00:00.118 ******
2025-02-04 09:46:02.777864 | orchestrator | changed: [localhost]
2025-02-04 09:46:02.777882 | orchestrator |
2025-02-04 09:46:02.777892 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-02-04 09:46:02.777909 | orchestrator | Tuesday 04 February 2025 09:42:45 +0000 (0:00:01.039) 0:00:01.158 ******
2025-02-04 09:46:02.777919 | orchestrator | changed: [localhost]
2025-02-04 09:46:02.777933 | orchestrator |
2025-02-04 09:46:02.777950 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-02-04 09:46:02.777967 | orchestrator | Tuesday 04 February 2025 09:43:16 +0000 (0:00:31.281) 0:00:32.439 ******
2025-02-04 09:46:02.777984 | orchestrator | changed: [localhost]
2025-02-04 09:46:02.778012 | orchestrator |
2025-02-04 09:46:02.778082 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-02-04 09:46:02.778100 | orchestrator |
2025-02-04 09:46:02.778116 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-02-04 09:46:02.778132 | orchestrator | Tuesday 04 February 2025 09:43:20 +0000 (0:00:03.833) 0:00:36.273 ******
2025-02-04 09:46:02.778148 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:46:02.778165 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:46:02.778183 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:46:02.778206 | orchestrator |
2025-02-04 09:46:02.778224 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-02-04 09:46:02.778240 | orchestrator | Tuesday 04 February 2025 09:43:20 +0000 (0:00:00.435) 0:00:36.708 ******
2025-02-04 09:46:02.778258 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_True)
2025-02-04 09:46:02.778275 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_True)
2025-02-04 09:46:02.778292 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_True)
2025-02-04 09:46:02.778309 | orchestrator |
2025-02-04 09:46:02.778327 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-02-04 09:46:02.778344 | orchestrator |
2025-02-04 09:46:02.778362 | orchestrator | TASK [ironic : include_tasks] **************************************************
2025-02-04 09:46:02.778379 | orchestrator | Tuesday 04 February 2025 09:43:21 +0000 (0:00:00.736) 0:00:37.444 ******
2025-02-04 09:46:02.778394 | orchestrator | included: /ansible/roles/ironic/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-02-04 09:46:02.778406 | orchestrator |
2025-02-04 09:46:02.778416 | orchestrator | TASK [service-ks-register : ironic | Creating services] ************************
2025-02-04 09:46:02.778427 | orchestrator | Tuesday 04 February 2025 09:43:22 +0000 (0:00:00.750) 0:00:38.195 ******
2025-02-04 09:46:02.778437 | orchestrator | FAILED - RETRYING: [testbed-node-0]: ironic | Creating services (5 retries left).
2025-02-04 09:46:02.778448 | orchestrator | FAILED - RETRYING: [testbed-node-0]: ironic | Creating services (4 retries left).
2025-02-04 09:46:02.778458 | orchestrator | FAILED - RETRYING: [testbed-node-0]: ironic | Creating services (3 retries left).
2025-02-04 09:46:02.778468 | orchestrator | FAILED - RETRYING: [testbed-node-0]: ironic | Creating services (2 retries left).
2025-02-04 09:46:02.778479 | orchestrator | FAILED - RETRYING: [testbed-node-0]: ironic | Creating services (1 retries left).
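The ironic registration now runs into the same unreachable identity endpoint. The "FAILED - RETRYING ... (N retries left)" lines are Ansible's until/retries loop around the task; the failure record that follows reports attempts: 5. A hedged Python rendering of that pattern (the delay between attempts is an assumption, not a value visible in this log):

import time

def retry(func, attempts: int = 5, delay: float = 5.0):
    # Call func until it succeeds or the attempts are exhausted.
    for remaining in range(attempts - 1, -1, -1):
        try:
            return func()
        except ConnectionError:
            if remaining == 0:
                raise  # surfaces as the final "failed:" result in the log
            print(f"FAILED - RETRYING ({remaining} retries left).")
            time.sleep(delay)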
2025-02-04 09:46:02.778529 | orchestrator | failed: [testbed-node-0] (item=ironic (baremetal)) => {"action": "os_keystone_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Ironic baremetal provisioning service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:6385"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:6385"}], "name": "ironic", "type": "baremetal"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connection.py\", line 174, in _new_conn\n conn = connection.create_connection(\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/util/connection.py\", line 95, in create_connection\n raise err\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/util/connection.py\", line 85, in create_connection\n sock.connect(sa)\nOSError: [Errno 113] No route to host\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connectionpool.py\", line 715, in urlopen\n httplib_response = self._make_request(\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connectionpool.py\", line 404, in _make_request\n self._validate_conn(conn)\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connectionpool.py\", line 1058, in _validate_conn\n conn.connect()\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connection.py\", line 363, in connect\n self.sock = conn = self._new_conn()\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connection.py\", line 186, in _new_conn\n raise NewConnectionError(\nurllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 113] No route to host\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/requests/adapters.py\", line 486, in send\n resp = conn.urlopen(\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connectionpool.py\", line 799, in urlopen\n retries = retries.increment(\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/util/retry.py\", line 592, in increment\n raise MaxRetryError(_pool, url, error or ResponseError(cause))\nurllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='api-int.testbed.osism.xyz', port=5000): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 113] No route to host'))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1021, in _send_request\n resp = self.session.request(method, url, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/requests/sessions.py\", line 589, in request\n resp = self.send(prep, **send_kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/requests/sessions.py\", line 703, in send\n r = adapter.send(request, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/requests/adapters.py\", line 519, in send\n raise ConnectionError(e, request=request)\nrequests.exceptions.ConnectionError: HTTPSConnectionPool(host='api-int.testbed.osism.xyz', port=5000): Max retries exceeded with url: / (Caused by 
NewConnectionError(': Failed to establish a new connection: [Errno 113] No route to host'))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 133, in _do_create_plugin\n disc = self.get_discovery(session,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 605, in get_discovery\n return discover.get_discovery(session=session, url=url,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 1459, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 539, in __init__\n self._data = get_version_data(session, url,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 106, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1154, in get\n return self.request(url, 'GET', **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 930, in request\n resp = send(**kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1037, in _send_request\n raise exceptions.ConnectFailure(msg)\nkeystoneauth1.exceptions.connection.ConnectFailure: Unable to establish connection to https://api-int.testbed.osism.xyz:5000: HTTPSConnectionPool(host='api-int.testbed.osism.xyz', port=5000): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 113] No route to host'))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1738662277.8575206-4386-15254620828444/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1738662277.8575206-4386-15254620828444/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1738662277.8575206-4386-15254620828444/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"/usr/lib/python3.10/runpy.py\", line 224, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib/python3.10/runpy.py\", line 96, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_os_keystone_service_payload_dbf4jx0x/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_os_keystone_service_payload_dbf4jx0x/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_os_keystone_service_payload_dbf4jx0x/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File 
\"/tmp/ansible_os_keystone_service_payload_dbf4jx0x/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_os_keystone_service_payload_dbf4jx0x/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.10/site-packages/openstack/service_description.py\", line 89, in __get__\n proxy = self._make_proxy(instance)\n File \"/opt/ansible/lib/python3.10/site-packages/openstack/service_description.py\", line 287, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 268, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 131, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 203, in get_auth_ref\n self._plugin = self._do_create_plugin(session)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 155, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. Unable to establish connection to https://api-int.testbed.osism.xyz:5000: HTTPSConnectionPool(host='api-int.testbed.osism.xyz', port=5000): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 113] No route to host'))\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} 2025-02-04 09:46:02.778566 | orchestrator | FAILED - RETRYING: [testbed-node-0]: ironic | Creating services (5 retries left). 2025-02-04 09:46:02.778577 | orchestrator | FAILED - RETRYING: [testbed-node-0]: ironic | Creating services (4 retries left). 2025-02-04 09:46:02.778587 | orchestrator | FAILED - RETRYING: [testbed-node-0]: ironic | Creating services (3 retries left). 2025-02-04 09:46:02.778597 | orchestrator | FAILED - RETRYING: [testbed-node-0]: ironic | Creating services (2 retries left). 2025-02-04 09:46:02.778608 | orchestrator | 2025-02-04 09:46:02.778618 | orchestrator | STILL ALIVE [task 'service-ks-register : ironic | Creating services' is running] *** 2025-02-04 09:46:02.778628 | orchestrator | FAILED - RETRYING: [testbed-node-0]: ironic | Creating services (1 retries left). 
2025-02-04 09:46:02.778656 | orchestrator | failed: [testbed-node-0] (item=ironic-inspector (baremetal-introspection)) => {"action": "os_keystone_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Ironic Inspector baremetal introspection service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:5050"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:5050"}], "name": "ironic-inspector", "type": "baremetal-introspection"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connection.py\", line 174, in _new_conn\n conn = connection.create_connection(\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/util/connection.py\", line 95, in create_connection\n raise err\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/util/connection.py\", line 85, in create_connection\n sock.connect(sa)\nOSError: [Errno 113] No route to host\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connectionpool.py\", line 715, in urlopen\n httplib_response = self._make_request(\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connectionpool.py\", line 404, in _make_request\n self._validate_conn(conn)\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connectionpool.py\", line 1058, in _validate_conn\n conn.connect()\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connection.py\", line 363, in connect\n self.sock = conn = self._new_conn()\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connection.py\", line 186, in _new_conn\n raise NewConnectionError(\nurllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 113] No route to host\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/requests/adapters.py\", line 486, in send\n resp = conn.urlopen(\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connectionpool.py\", line 799, in urlopen\n retries = retries.increment(\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/util/retry.py\", line 592, in increment\n raise MaxRetryError(_pool, url, error or ResponseError(cause))\nurllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='api-int.testbed.osism.xyz', port=5000): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 113] No route to host'))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1021, in _send_request\n resp = self.session.request(method, url, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/requests/sessions.py\", line 589, in request\n resp = self.send(prep, **send_kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/requests/sessions.py\", line 703, in send\n r = adapter.send(request, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/requests/adapters.py\", line 519, in send\n raise ConnectionError(e, request=request)\nrequests.exceptions.ConnectionError: HTTPSConnectionPool(host='api-int.testbed.osism.xyz', 
port=5000): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 113] No route to host'))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 133, in _do_create_plugin\n disc = self.get_discovery(session,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 605, in get_discovery\n return discover.get_discovery(session=session, url=url,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 1459, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 539, in __init__\n self._data = get_version_data(session, url,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 106, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1154, in get\n return self.request(url, 'GET', **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 930, in request\n resp = send(**kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1037, in _send_request\n raise exceptions.ConnectFailure(msg)\nkeystoneauth1.exceptions.connection.ConnectFailure: Unable to establish connection to https://api-int.testbed.osism.xyz:5000: HTTPSConnectionPool(host='api-int.testbed.osism.xyz', port=5000): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 113] No route to host'))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1738662356.674865-4929-258845837599775/AnsiballZ_catalog_service.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1738662356.674865-4929-258845837599775/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1738662356.674865-4929-258845837599775/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"/usr/lib/python3.10/runpy.py\", line 224, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib/python3.10/runpy.py\", line 96, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_os_keystone_service_payload_av285yl_/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in \n File \"/tmp/ansible_os_keystone_service_payload_av285yl_/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_os_keystone_service_payload_av285yl_/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File 
\"/tmp/ansible_os_keystone_service_payload_av285yl_/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_os_keystone_service_payload_av285yl_/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.10/site-packages/openstack/service_description.py\", line 89, in __get__\n proxy = self._make_proxy(instance)\n File \"/opt/ansible/lib/python3.10/site-packages/openstack/service_description.py\", line 287, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 268, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 131, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 203, in get_auth_ref\n self._plugin = self._do_create_plugin(session)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 155, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. 
Unable to establish connection to https://api-int.testbed.osism.xyz:5000: HTTPSConnectionPool(host='api-int.testbed.osism.xyz', port=5000): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 113] No route to host'))\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
2025-02-04 09:46:02.778680 | orchestrator |
2025-02-04 09:46:02.778690 | orchestrator | PLAY RECAP *********************************************************************
2025-02-04 09:46:02.778701 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-02-04 09:46:02.778712 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-02-04 09:46:02.778723 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-02-04 09:46:02.778738 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-02-04 09:46:02.779512 | orchestrator |
2025-02-04 09:46:02.779550 | orchestrator |
2025-02-04 09:46:02.779568 | orchestrator | TASKS RECAP ********************************************************************
2025-02-04 09:46:02.779585 | orchestrator | Tuesday 04 February 2025 09:45:59 +0000 (0:02:37.627) 0:03:15.822 ******
2025-02-04 09:46:02.779601 | orchestrator | ===============================================================================
2025-02-04 09:46:02.779617 | orchestrator | service-ks-register : ironic | Creating services ---------------------- 157.63s
2025-02-04 09:46:02.779636 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 31.28s
2025-02-04 09:46:02.779653 | orchestrator | Download ironic-agent kernel -------------------------------------------- 3.83s
2025-02-04 09:46:02.779687 | orchestrator | Ensure the destination directory exists --------------------------------- 1.04s
2025-02-04 09:46:02.779721 | orchestrator | ironic : include_tasks -------------------------------------------------- 0.75s
2025-02-04 09:46:02.779740 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.74s
2025-02-04 09:46:02.779757 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.44s
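In parallel with the Ansible output, the deployment driver keeps polling its queued tasks once per second until each one reaches SUCCESS, which is exactly what the interleaved INFO lines show. A hedged sketch of that loop; fetch_state is a stand-in for however the manager reads task state, not a real osism API:

import time

def wait_for_tasks(task_ids, fetch_state, interval: float = 1.0):
    # Report each task's state and loop until none is left unfinished.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = fetch_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:.0f} second(s) until the next check")
            time.sleep(interval)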
2025-02-04 09:46:02.779769 | orchestrator | 2025-02-04 09:46:02 | INFO  | Task 58ca35cf-61be-458d-b443-1faeda41be1b is in state STARTED
2025-02-04 09:46:02.779780 | orchestrator | 2025-02-04 09:46:02 | INFO  | Task 12c4f1cc-6084-468a-a0df-96cc8ab1a627 is in state STARTED
2025-02-04 09:46:02.779798 | orchestrator | 2025-02-04 09:46:02 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED
2025-02-04 09:46:05.826502 | orchestrator | 2025-02-04 09:46:02 | INFO  | Wait 1 second(s) until the next check
2025-02-04 09:46:05.826677 | orchestrator | 2025-02-04 09:46:05 | INFO  | Task 86c7895a-6d9f-46e8-be7a-115fd5eafb60 is in state STARTED
2025-02-04 09:46:05.829305 | orchestrator | 2025-02-04 09:46:05 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED
2025-02-04 09:46:05.832023 | orchestrator | 2025-02-04 09:46:05 | INFO  | Task 58ca35cf-61be-458d-b443-1faeda41be1b is in state STARTED
2025-02-04 09:46:05.834097 | orchestrator | 2025-02-04 09:46:05 | INFO  | Task 12c4f1cc-6084-468a-a0df-96cc8ab1a627 is in state STARTED
2025-02-04 09:46:05.837144 | orchestrator | 2025-02-04 09:46:05 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED
2025-02-04 09:46:05.837405 | orchestrator | 2025-02-04 09:46:05 | INFO  | Wait 1 second(s) until the next check
2025-02-04 09:46:08.868499 | orchestrator | 2025-02-04 09:46:08 | INFO  | Task 86c7895a-6d9f-46e8-be7a-115fd5eafb60 is in state STARTED
2025-02-04 09:46:08.873515 | orchestrator | 2025-02-04 09:46:08 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED
2025-02-04 09:46:08.874283 | orchestrator | 2025-02-04 09:46:08 | INFO  | Task 58ca35cf-61be-458d-b443-1faeda41be1b is in state STARTED
2025-02-04 09:46:08.874999 | orchestrator | 2025-02-04 09:46:08 | INFO  | Task 12c4f1cc-6084-468a-a0df-96cc8ab1a627 is in state STARTED
2025-02-04 09:46:08.875524 | orchestrator | 2025-02-04 09:46:08 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED
2025-02-04 09:46:08.875630 | orchestrator | 2025-02-04 09:46:08 | INFO  | Wait 1 second(s) until the next check
2025-02-04 09:46:11.913132 | orchestrator | 2025-02-04 09:46:11 | INFO  | Task f289e09a-9402-49d5-a8be-ae2603d60778 is in state STARTED
2025-02-04 09:46:11.914362 | orchestrator | 2025-02-04 09:46:11 | INFO  | Task 86c7895a-6d9f-46e8-be7a-115fd5eafb60 is in state SUCCESS
2025-02-04 09:46:11.916400 | orchestrator |
2025-02-04 09:46:11.916446 | orchestrator |
2025-02-04 09:46:11.916461 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-02-04 09:46:11.916477 | orchestrator |
2025-02-04 09:46:11.916492 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-02-04 09:46:11.916506 | orchestrator | Tuesday 04 February 2025 09:46:04 +0000 (0:00:00.392) 0:00:00.392 ******
2025-02-04 09:46:11.916521 | orchestrator | ok: [testbed-manager]
2025-02-04 09:46:11.916536 | orchestrator | ok: [testbed-node-0]
2025-02-04 09:46:11.916551 | orchestrator | ok: [testbed-node-1]
2025-02-04 09:46:11.916565 | orchestrator | ok: [testbed-node-2]
2025-02-04 09:46:11.916579 | orchestrator | ok: [testbed-node-3]
2025-02-04 09:46:11.916593 | orchestrator | ok: [testbed-node-4]
2025-02-04 09:46:11.916607 | orchestrator | ok: [testbed-node-5]
2025-02-04 09:46:11.916622 | orchestrator |
2025-02-04 09:46:11.916636 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-02-04 09:46:11.916951 | orchestrator | Tuesday 04 February 2025 09:46:05 +0000 (0:00:01.168) 0:00:01.561 ******
2025-02-04 09:46:11.916977 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2025-02-04 09:46:11.916992 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2025-02-04 09:46:11.917007 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2025-02-04 09:46:11.917021 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2025-02-04 09:46:11.917036 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2025-02-04 09:46:11.917050 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2025-02-04 09:46:11.917064 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2025-02-04 09:46:11.917079 | orchestrator |
2025-02-04 09:46:11.917093 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2025-02-04 09:46:11.917107 | orchestrator |
2025-02-04 09:46:11.917121 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-02-04 09:46:11.917135 | orchestrator | Tuesday 04 
February 2025 09:46:06 +0000 (0:00:01.346) 0:00:02.908 ****** 2025-02-04 09:46:11.917150 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-02-04 09:46:11.917165 | orchestrator | 2025-02-04 09:46:11.917180 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-02-04 09:46:11.917194 | orchestrator | Tuesday 04 February 2025 09:46:07 +0000 (0:00:01.333) 0:00:04.241 ****** 2025-02-04 09:46:11.917242 | orchestrator | fatal: [testbed-manager]: FAILED! => {"msg": "{'prometheus-server': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': '{{ enable_prometheus_server | bool }}', 'image': '{{ prometheus_server_image_full }}', 'volumes': '{{ prometheus_server_default_volumes + prometheus_server_extra_volumes }}', 'dimensions': '{{ prometheus_server_dimensions }}', 'haproxy': {'prometheus_server': {'enabled': '{{ enable_prometheus_server | bool }}', 'mode': 'http', 'external': False, 'port': '{{ prometheus_port }}', 'active_passive': '{{ prometheus_active_passive | bool }}'}, 'prometheus_server_external': {'enabled': '{{ enable_prometheus_server_external | bool }}', 'mode': 'http', 'external': True, 'external_fqdn': '{{ prometheus_external_fqdn }}', 'port': '{{ prometheus_public_port }}', 'listen_port': '{{ prometheus_listen_port }}', 'active_passive': '{{ prometheus_active_passive | bool }}'}}}, 'prometheus-node-exporter': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': '{{ enable_prometheus_node_exporter | bool }}', 'image': '{{ prometheus_node_exporter_image_full }}', 'pid_mode': 'host', 'volumes': '{{ prometheus_node_exporter_default_volumes + prometheus_node_exporter_extra_volumes }}', 'dimensions': '{{ prometheus_node_exporter_dimensions }}'}, 'prometheus-mysqld-exporter': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': '{{ enable_prometheus_mysqld_exporter | bool }}', 'image': '{{ prometheus_mysqld_exporter_image_full }}', 'volumes': '{{ prometheus_mysqld_exporter_default_volumes + prometheus_mysqld_exporter_extra_volumes }}', 'dimensions': '{{ prometheus_mysqld_exporter_dimensions }}'}, 'prometheus-memcached-exporter': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': '{{ enable_prometheus_memcached_exporter | bool }}', 'image': '{{ prometheus_memcached_exporter_image_full }}', 'volumes': '{{ prometheus_memcached_exporter_default_volumes + prometheus_memcached_exporter_extra_volumes }}', 'dimensions': '{{ prometheus_memcached_exporter_dimensions }}'}, 'prometheus-cadvisor': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': '{{ enable_prometheus_cadvisor | bool }}', 'image': '{{ prometheus_cadvisor_image_full }}', 'volumes': '{{ prometheus_cadvisor_default_volumes + prometheus_cadvisor_extra_volumes }}', 'dimensions': '{{ prometheus_cadvisor_dimensions }}'}, 'prometheus-alertmanager': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': '{{ enable_prometheus_alertmanager | bool }}', 'image': '{{ prometheus_alertmanager_image_full }}', 'volumes': '{{ prometheus_alertmanager_default_volumes + prometheus_alertmanager_extra_volumes }}', 'dimensions': '{{ prometheus_alertmanager_dimensions }}', 'haproxy': {'prometheus_alertmanager': {'enabled': '{{ 
enable_prometheus_alertmanager | bool }}', 'mode': 'http', 'external': False, 'port': '{{ prometheus_alertmanager_port }}', 'auth_user': '{{ prometheus_alertmanager_user }}', 'auth_pass': '{{ prometheus_alertmanager_password }}', 'active_passive': '{{ prometheus_alertmanager_active_passive | bool }}'}, 'prometheus_alertmanager_external': {'enabled': '{{ enable_prometheus_alertmanager_external | bool }}', 'mode': 'http', 'external': True, 'external_fqdn': '{{ prometheus_alertmanager_external_fqdn }}', 'port': '{{ prometheus_alertmanager_public_port }}', 'listen_port': '{{ prometheus_alertmanager_listen_port }}', 'auth_user': '{{ prometheus_alertmanager_user }}', 'auth_pass': '{{ prometheus_alertmanager_password }}', 'active_passive': '{{ prometheus_alertmanager_active_passive | bool }}'}}}, 'prometheus-openstack-exporter': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': '{{ enable_prometheus_openstack_exporter | bool }}', 'environment': {'OS_COMPUTE_API_VERSION': '{{ prometheus_openstack_exporter_compute_api_version }}'}, 'image': '{{ prometheus_openstack_exporter_image_full }}', 'volumes': '{{ prometheus_openstack_exporter_default_volumes + prometheus_openstack_exporter_extra_volumes }}', 'dimensions': '{{ prometheus_openstack_exporter_dimensions }}', 'haproxy': {'prometheus_openstack_exporter': {'enabled': '{{ enable_prometheus_openstack_exporter | bool }}', 'mode': 'http', 'external': False, 'port': '{{ prometheus_openstack_exporter_port }}', 'backend_http_extra': ['timeout server {{ prometheus_openstack_exporter_timeout }}']}, 'prometheus_openstack_exporter_external': {'enabled': '{{ enable_prometheus_openstack_exporter_external | bool }}', 'mode': 'http', 'external': True, 'port': '{{ prometheus_openstack_exporter_port }}', 'backend_http_extra': ['timeout server {{ prometheus_openstack_exporter_timeout }}']}}}, 'prometheus-elasticsearch-exporter': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': '{{ enable_prometheus_elasticsearch_exporter | bool }}', 'image': '{{ prometheus_elasticsearch_exporter_image_full }}', 'volumes': '{{ prometheus_elasticsearch_exporter_default_volumes + prometheus_elasticsearch_exporter_extra_volumes }}', 'dimensions': '{{ prometheus_elasticsearch_exporter_dimensions }}'}, 'prometheus-blackbox-exporter': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': '{{ enable_prometheus_blackbox_exporter | bool }}', 'image': '{{ prometheus_blackbox_exporter_image_full }}', 'volumes': '{{ prometheus_blackbox_exporter_default_volumes + prometheus_blackbox_exporter_extra_volumes }}', 'dimensions': '{{ prometheus_blackbox_exporter_dimensions }}'}, 'prometheus-libvirt-exporter': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': '{{ enable_prometheus_libvirt_exporter | bool }}', 'image': '{{ prometheus_libvirt_exporter_image_full }}', 'volumes': '{{ prometheus_libvirt_exporter_default_volumes + prometheus_libvirt_exporter_extra_volumes }}', 'dimensions': '{{ prometheus_libvirt_exporter_dimensions }}'}, 'prometheus-msteams': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': '{{ enable_prometheus_msteams | bool }}', 'environment': '{{ prometheus_msteams_container_proxy }}', 'image': '{{ prometheus_msteams_image_full }}', 'volumes': '{{ prometheus_msteams_default_volumes + prometheus_msteams_extra_volumes }}', 'dimensions': '{{ 
prometheus_msteams_dimensions }}'}}: 'enable_prometheus_msteams' is undefined"}
2025-02-04 09:46:11.917393 | orchestrator | fatal: [testbed-node-0]: FAILED! => (same 'enable_prometheus_msteams' is undefined error as testbed-manager)
2025-02-04 09:46:11.917439 | orchestrator | fatal: [testbed-node-1]: FAILED! => (same error)
2025-02-04 09:46:11.917480 | orchestrator | fatal: [testbed-node-2]: FAILED! => (same error)
2025-02-04 09:46:11.917524 | orchestrator | fatal: [testbed-node-3]: FAILED! => (same error)
2025-02-04 09:46:11.917570 | orchestrator | fatal: [testbed-node-4]: FAILED! => (same error)
2025-02-04 09:46:11.917611 | orchestrator | fatal: [testbed-node-5]: FAILED! => (same error)
2025-02-04 09:46:11.917634 | orchestrator |
2025-02-04 09:46:11.917649 | orchestrator | PLAY RECAP *********************************************************************
2025-02-04 09:46:11.917663 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-02-04 09:46:11.917678 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-02-04 09:46:11.917693 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
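The failure above is a straightforward undefined-variable error: while templating the prometheus role's service dictionary, Ansible reaches enable_prometheus_msteams, which is not defined anywhere in this environment, so the whole dictionary (dumped verbatim in the message) fails to render on every host in the play. A minimal sketch of one way to unblock the deploy, assuming the flag only needs an explicit default in the Kolla configuration; the file path follows the usual OSISM testbed layout and is an assumption, not a confirmed fix:

    # environments/kolla/configuration.yml  (path assumed for an OSISM testbed)
    # Define the flag referenced by the prometheus service dictionary so that
    # templating can succeed; "no" keeps the msteams bridge disabled.
    enable_prometheus_msteams: "no"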
2025-02-04 09:46:11.917708 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-02-04 09:46:11.917722 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-02-04 09:46:11.917751 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-02-04 09:46:11.917766 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-02-04 09:46:11.917786 | orchestrator |
2025-02-04 09:46:11.917801 | orchestrator |
2025-02-04 09:46:11.917815 | orchestrator | TASKS RECAP ********************************************************************
2025-02-04 09:46:11.917829 | orchestrator | Tuesday 04 February 2025 09:46:09 +0000 (0:00:01.406) 0:00:05.647 ******
2025-02-04 09:46:11.917900 | orchestrator | ===============================================================================
2025-02-04 09:46:11.917922 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 1.41s
2025-02-04 09:46:11.918452 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.35s
2025-02-04 09:46:11.918481 | orchestrator | prometheus : include_tasks ---------------------------------------------- 1.33s
2025-02-04 09:46:11.918496 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.17s
2025-02-04 09:46:11.918510 | orchestrator | 2025-02-04 09:46:11 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED
2025-02-04 09:46:11.918524 | orchestrator | 2025-02-04 09:46:11 | INFO  | Task 58ca35cf-61be-458d-b443-1faeda41be1b is in state STARTED
2025-02-04 09:46:11.918544 | orchestrator | 2025-02-04 09:46:11 | INFO  | Task 12c4f1cc-6084-468a-a0df-96cc8ab1a627 is in state SUCCESS
2025-02-04 09:46:11.919515 | orchestrator | 2025-02-04 09:46:11 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED
2025-02-04 09:46:14.967753 | orchestrator | 2025-02-04 09:46:11 | INFO  | Wait 1 second(s) until the next check
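The two "Group hosts" entries timed in the recap above are kolla-ansible's dynamic-grouping pattern: every run first sorts hosts into groups derived from the Kolla action and from each enable_<service> flag (hence the item=enable_prometheus_True output earlier), so the service plays that follow only target hosts where that service is switched on. A minimal sketch of the pattern with ansible.builtin.group_by; the play and variable names are illustrative, not kolla-ansible's exact source:

    - name: Group hosts based on configuration (sketch of the kolla-ansible pattern)
      hosts: all
      gather_facts: false
      tasks:
        - name: Sort each host into an enable_<service>_<bool> group
          ansible.builtin.group_by:
            key: "enable_prometheus_{{ enable_prometheus | default(false) | bool }}"

    # A later play can then target only the hosts where the flag is set:
    # - name: Apply role prometheus
    #   hosts: enable_prometheus_True
    #   roles:
    #     - prometheus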
(identical one-second poll cycles repeat from 09:46:14 through 09:46:48; the four remaining tasks f289e09a-9402-49d5-a8be-ae2603d60778, 7e541f1e-d12b-4499-a990-004ec22bccd6, 58ca35cf-61be-458d-b443-1faeda41be1b and 0fe9ee47-86c4-4b01-afb8-83e483a6870c stay in state STARTED throughout)
2025-02-04 09:46:51.468571 | orchestrator | 2025-02-04 09:46:51 | INFO  | Task f289e09a-9402-49d5-a8be-ae2603d60778 is in state STARTED
2025-02-04 09:46:51.469230 | orchestrator | 2025-02-04 09:46:51 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED
2025-02-04 09:46:51.469293 | orchestrator | 2025-02-04 09:46:51 | INFO  | Task 58ca35cf-61be-458d-b443-1faeda41be1b is in state STARTED
2025-02-04 09:46:51.469333 | orchestrator | 2025-02-04 09:46:51 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED
2025-02-04 09:46:54.514758 | orchestrator | 2025-02-04 09:46:51 | INFO  | Wait 1 second(s) until the next check
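The STARTED/SUCCESS records threaded through this output come from the deploy wrapper polling its queued background tasks roughly once per second ("Wait 1 second(s) until the next check") until each task leaves the STARTED state. The same wait-until-done idiom can be written with Ansible's retries/delay/until; check-task-state below is a hypothetical helper standing in for whatever reports a task's state, not an actual osism command:

    - name: Wait for a background task to finish (illustrative sketch)
      hosts: orchestrator
      gather_facts: false
      tasks:
        - name: Poll once per second until the task is no longer STARTED
          ansible.builtin.command: /usr/local/bin/check-task-state f289e09a-9402-49d5-a8be-ae2603d60778
          register: task_state
          changed_when: false
          retries: 600        # give up after roughly ten minutes
          delay: 1            # matches "Wait 1 second(s) until the next check"
          until: task_state.stdout != 'STARTED'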
2025-02-04 09:46:54.514959 | orchestrator | 2025-02-04 09:46:54 | INFO  | Task f289e09a-9402-49d5-a8be-ae2603d60778 is in state STARTED
2025-02-04 09:46:54.519886 | orchestrator | 2025-02-04 09:46:54 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED
2025-02-04 09:46:54.521722 | orchestrator | 2025-02-04 09:46:54 | INFO  | Task 58ca35cf-61be-458d-b443-1faeda41be1b is in state STARTED
2025-02-04 09:46:54.525075 | orchestrator | 2025-02-04 09:46:54 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED
2025-02-04 09:46:57.553078 | orchestrator | 2025-02-04 09:46:54 | INFO  | Wait 1 second(s) until the next check
2025-02-04 09:46:57.553231 | orchestrator | 2025-02-04 09:46:57 | INFO  | Task f289e09a-9402-49d5-a8be-ae2603d60778 is in state STARTED
2025-02-04 09:46:57.554616 | orchestrator | 2025-02-04 09:46:57 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED
2025-02-04 09:46:57.554680 | orchestrator |
2025-02-04 09:46:57.554698 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
2025-02-04 09:46:57.554713 | orchestrator |
2025-02-04 09:46:57.554727 | orchestrator | PLAY [Bootstrap ceph dashboard] ***********************************************
2025-02-04 09:46:57.554770 | orchestrator |
2025-02-04 09:46:57.554786 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2025-02-04 09:46:57.554834 | orchestrator | Tuesday 04 February 2025 09:45:36 +0000 (0:00:00.560) 0:00:00.560 ******
2025-02-04 09:46:57.554857 | orchestrator | changed: [testbed-manager]
2025-02-04 09:46:57.554874 | orchestrator |
2025-02-04 09:46:57.554888 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2025-02-04 09:46:57.554903 | orchestrator | Tuesday 04 February 2025 09:45:38 +0000 (0:00:01.943) 0:00:02.504 ******
2025-02-04 09:46:57.554917 | orchestrator | changed: [testbed-manager]
2025-02-04 09:46:57.554932 | orchestrator |
2025-02-04 09:46:57.554946 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2025-02-04 09:46:57.554960 | orchestrator | Tuesday 04 February 2025 09:45:39 +0000 (0:00:00.903) 0:00:03.407 ******
2025-02-04 09:46:57.554974 | orchestrator | changed: [testbed-manager]
2025-02-04 09:46:57.554988 | orchestrator |
2025-02-04 09:46:57.555002 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2025-02-04 09:46:57.555016 | orchestrator | Tuesday 04 February 2025 09:45:40 +0000 (0:00:00.891) 0:00:04.299 ******
2025-02-04 09:46:57.555030 | orchestrator | changed: [testbed-manager]
2025-02-04 09:46:57.555044 | orchestrator |
2025-02-04 09:46:57.555073 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2025-02-04 09:46:57.555088 | orchestrator | Tuesday 04 February 2025 09:45:41 +0000 (0:00:01.114) 0:00:05.413 ******
2025-02-04 09:46:57.555102 | orchestrator | changed: [testbed-manager]
2025-02-04 09:46:57.555116 | orchestrator |
2025-02-04 09:46:57.555130 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2025-02-04 09:46:57.555144 | orchestrator | Tuesday 04 February 2025 09:45:42 +0000 (0:00:00.971) 0:00:06.384 ******
2025-02-04 09:46:57.555158 | orchestrator | changed: [testbed-manager]
2025-02-04 09:46:57.555172 | orchestrator |
2025-02-04 09:46:57.555186 | orchestrator | TASK [Enable the ceph
dashboard] *********************************************** 2025-02-04 09:46:57.555203 | orchestrator | Tuesday 04 February 2025 09:45:43 +0000 (0:00:00.943) 0:00:07.328 ****** 2025-02-04 09:46:57.555218 | orchestrator | changed: [testbed-manager] 2025-02-04 09:46:57.555235 | orchestrator | 2025-02-04 09:46:57.555259 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-02-04 09:46:57.555283 | orchestrator | Tuesday 04 February 2025 09:45:45 +0000 (0:00:02.057) 0:00:09.386 ****** 2025-02-04 09:46:57.555306 | orchestrator | changed: [testbed-manager] 2025-02-04 09:46:57.555330 | orchestrator | 2025-02-04 09:46:57.555353 | orchestrator | TASK [Create admin user] ******************************************************* 2025-02-04 09:46:57.555378 | orchestrator | Tuesday 04 February 2025 09:45:46 +0000 (0:00:01.077) 0:00:10.464 ****** 2025-02-04 09:46:57.555401 | orchestrator | changed: [testbed-manager] 2025-02-04 09:46:57.555426 | orchestrator | 2025-02-04 09:46:57.555451 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-02-04 09:46:57.555474 | orchestrator | Tuesday 04 February 2025 09:46:04 +0000 (0:00:17.862) 0:00:28.326 ****** 2025-02-04 09:46:57.555499 | orchestrator | skipping: [testbed-manager] 2025-02-04 09:46:57.555515 | orchestrator | 2025-02-04 09:46:57.555529 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-02-04 09:46:57.555543 | orchestrator | 2025-02-04 09:46:57.555557 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-02-04 09:46:57.555572 | orchestrator | Tuesday 04 February 2025 09:46:05 +0000 (0:00:01.016) 0:00:29.343 ****** 2025-02-04 09:46:57.555586 | orchestrator | changed: [testbed-node-0] 2025-02-04 09:46:57.555600 | orchestrator | 2025-02-04 09:46:57.555614 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-02-04 09:46:57.555628 | orchestrator | 2025-02-04 09:46:57.555642 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-02-04 09:46:57.555656 | orchestrator | Tuesday 04 February 2025 09:46:07 +0000 (0:00:02.359) 0:00:31.702 ****** 2025-02-04 09:46:57.555670 | orchestrator | changed: [testbed-node-1] 2025-02-04 09:46:57.555697 | orchestrator | 2025-02-04 09:46:57.555711 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-02-04 09:46:57.555725 | orchestrator | 2025-02-04 09:46:57.555740 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-02-04 09:46:57.555754 | orchestrator | Tuesday 04 February 2025 09:46:09 +0000 (0:00:01.550) 0:00:33.253 ****** 2025-02-04 09:46:57.555768 | orchestrator | changed: [testbed-node-2] 2025-02-04 09:46:57.555782 | orchestrator | 2025-02-04 09:46:57.555796 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:46:57.555847 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-02-04 09:46:57.555863 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-04 09:46:57.555877 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-04 09:46:57.555892 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 
failed=0 skipped=0 rescued=0 ignored=0 2025-02-04 09:46:57.555906 | orchestrator | 2025-02-04 09:46:57.555920 | orchestrator | 2025-02-04 09:46:57.555934 | orchestrator | 2025-02-04 09:46:57.555948 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:46:57.555962 | orchestrator | Tuesday 04 February 2025 09:46:10 +0000 (0:00:01.447) 0:00:34.701 ****** 2025-02-04 09:46:57.555976 | orchestrator | =============================================================================== 2025-02-04 09:46:57.555990 | orchestrator | Create admin user ------------------------------------------------------ 17.86s 2025-02-04 09:46:57.556017 | orchestrator | Restart ceph manager service -------------------------------------------- 5.36s 2025-02-04 09:46:57.556032 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.06s 2025-02-04 09:46:57.556046 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.94s 2025-02-04 09:46:57.556060 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.11s 2025-02-04 09:46:57.556074 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.08s 2025-02-04 09:46:57.556089 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 1.02s 2025-02-04 09:46:57.556110 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.97s 2025-02-04 09:46:57.556125 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.94s 2025-02-04 09:46:57.556139 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.90s 2025-02-04 09:46:57.556153 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.89s 2025-02-04 09:46:57.556167 | orchestrator | 2025-02-04 09:46:57.556181 | orchestrator | 2025-02-04 09:46:57.556195 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-04 09:46:57.556209 | orchestrator | 2025-02-04 09:46:57.556224 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-04 09:46:57.556238 | orchestrator | Tuesday 04 February 2025 09:45:33 +0000 (0:00:00.304) 0:00:00.304 ****** 2025-02-04 09:46:57.556252 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:46:57.556267 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:46:57.556281 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:46:57.556295 | orchestrator | 2025-02-04 09:46:57.556309 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-04 09:46:57.556324 | orchestrator | Tuesday 04 February 2025 09:45:34 +0000 (0:00:00.351) 0:00:00.656 ****** 2025-02-04 09:46:57.556338 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-02-04 09:46:57.556352 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-02-04 09:46:57.556367 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-02-04 09:46:57.556388 | orchestrator | 2025-02-04 09:46:57.556402 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-02-04 09:46:57.556416 | orchestrator | 2025-02-04 09:46:57.556430 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-02-04 09:46:57.556444 | orchestrator | 
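The "Bootstrap ceph dashboard" play above reduces to a handful of ceph CLI calls: disable the mgr dashboard module, set the mgr/dashboard/* options named in the task headers, re-enable the module, and create the admin account with ceph dashboard ac-user-create (the 17.9s outlier in the recap). A minimal Ansible sketch of the same sequence, assuming the ceph CLI is usable on testbed-manager and that ceph_dashboard_password is defined elsewhere; the task layout and the temporary password file path are illustrative, not copied from the OSISM playbook:

- name: Bootstrap ceph dashboard (sketch)
  hosts: testbed-manager
  gather_facts: false
  tasks:
    - name: Disable the ceph dashboard
      ansible.builtin.command: ceph mgr module disable dashboard
      changed_when: true

    - name: Set the mgr/dashboard/* options shown above
      ansible.builtin.command: "ceph config set mgr {{ item.key }} {{ item.value }}"
      loop:
        - { key: mgr/dashboard/ssl, value: "false" }
        - { key: mgr/dashboard/server_port, value: "7000" }
        - { key: mgr/dashboard/server_addr, value: "0.0.0.0" }
        - { key: mgr/dashboard/standby_behaviour, value: "error" }
        - { key: mgr/dashboard/standby_error_status_code, value: "404" }
      changed_when: true

    - name: Enable the ceph dashboard
      ansible.builtin.command: ceph mgr module enable dashboard
      changed_when: true

    - name: Write ceph_dashboard_password to a temporary file
      ansible.builtin.copy:
        content: "{{ ceph_dashboard_password }}"
        dest: /tmp/ceph_dashboard_password  # illustrative path
        mode: "0600"
      no_log: true

    - name: Create admin user  # ac-user-create reads the password from a file (-i)
      ansible.builtin.command: >-
        ceph dashboard ac-user-create admin
        -i /tmp/ceph_dashboard_password administrator
      changed_when: true

The three single-task "Restart ceph manager services" plays that follow exist only to restart ceph-mgr on testbed-node-0, -1 and -2 one after another, so the new dashboard settings take effect without all managers going away at once.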
Tuesday 04 February 2025 09:45:34 +0000 (0:00:00.398) 0:00:01.054 ****** 2025-02-04 09:46:57.556458 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:46:57.556473 | orchestrator | 2025-02-04 09:46:57.556487 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-02-04 09:46:57.556502 | orchestrator | Tuesday 04 February 2025 09:45:35 +0000 (0:00:00.613) 0:00:01.668 ****** 2025-02-04 09:46:57.556515 | orchestrator | FAILED - RETRYING: [testbed-node-0]: octavia | Creating services (5 retries left). 2025-02-04 09:46:57.556529 | orchestrator | FAILED - RETRYING: [testbed-node-0]: octavia | Creating services (4 retries left). 2025-02-04 09:46:57.556543 | orchestrator | FAILED - RETRYING: [testbed-node-0]: octavia | Creating services (3 retries left). 2025-02-04 09:46:57.556558 | orchestrator | FAILED - RETRYING: [testbed-node-0]: octavia | Creating services (2 retries left). 2025-02-04 09:46:57.556571 | orchestrator | FAILED - RETRYING: [testbed-node-0]: octavia | Creating services (1 retries left). 2025-02-04 09:46:57.556626 | orchestrator | failed: [testbed-node-0] (item=octavia (load-balancer)) => {"action": "os_keystone_service", "ansible_loop_var": "item", "attempts": 5, "changed": false, "item": {"description": "Octavia Load Balancing Service", "endpoints": [{"interface": "internal", "url": "https://api-int.testbed.osism.xyz:9876"}, {"interface": "public", "url": "https://api.testbed.osism.xyz:9876"}], "name": "octavia", "type": "load-balancer"}, "module_stderr": "Failed to discover available identity versions when contacting https://api-int.testbed.osism.xyz:5000. Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connection.py\", line 174, in _new_conn\n conn = connection.create_connection(\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/util/connection.py\", line 95, in create_connection\n raise err\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/util/connection.py\", line 85, in create_connection\n sock.connect(sa)\nOSError: [Errno 113] No route to host\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connectionpool.py\", line 715, in urlopen\n httplib_response = self._make_request(\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connectionpool.py\", line 404, in _make_request\n self._validate_conn(conn)\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connectionpool.py\", line 1058, in _validate_conn\n conn.connect()\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connection.py\", line 363, in connect\n self.sock = conn = self._new_conn()\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connection.py\", line 186, in _new_conn\n raise NewConnectionError(\nurllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x...>: Failed to establish a new connection: [Errno 113] No route to host\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/requests/adapters.py\", line 486, in send\n resp = conn.urlopen(\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/connectionpool.py\", line 799, in urlopen\n retries = retries.increment(\n File \"/opt/ansible/lib/python3.10/site-packages/urllib3/util/retry.py\", line 592, in
increment\n raise MaxRetryError(_pool, url, error or ResponseError(cause))\nurllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='api-int.testbed.osism.xyz', port=5000): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x...>: Failed to establish a new connection: [Errno 113] No route to host'))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1021, in _send_request\n resp = self.session.request(method, url, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/requests/sessions.py\", line 589, in request\n resp = self.send(prep, **send_kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/requests/sessions.py\", line 703, in send\n r = adapter.send(request, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/requests/adapters.py\", line 519, in send\n raise ConnectionError(e, request=request)\nrequests.exceptions.ConnectionError: HTTPSConnectionPool(host='api-int.testbed.osism.xyz', port=5000): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x...>: Failed to establish a new connection: [Errno 113] No route to host'))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 133, in _do_create_plugin\n disc = self.get_discovery(session,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 605, in get_discovery\n return discover.get_discovery(session=session, url=url,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 1459, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 539, in __init__\n self._data = get_version_data(session, url,\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/discover.py\", line 106, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1154, in get\n return self.request(url, 'GET', **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 930, in request\n resp = send(**kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1037, in _send_request\n raise exceptions.ConnectFailure(msg)\nkeystoneauth1.exceptions.connection.ConnectFailure: Unable to establish connection to https://api-int.testbed.osism.xyz:5000: HTTPSConnectionPool(host='api-int.testbed.osism.xyz', port=5000): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x...>: Failed to establish a new connection: [Errno 113] No route to host'))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/tmp/ansible-tmp-1738662411.113537-5099-167409473422837/AnsiballZ_catalog_service.py\", line 107, in <module>\n _ansiballz_main()\n File \"/tmp/ansible-tmp-1738662411.113537-5099-167409473422837/AnsiballZ_catalog_service.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1738662411.113537-5099-167409473422837/AnsiballZ_catalog_service.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.catalog_service',
init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.catalog_service', _modlib_path=modlib_path),\n File \"/usr/lib/python3.10/runpy.py\", line 224, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib/python3.10/runpy.py\", line 96, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_os_keystone_service_payload_tcilmnk0/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 211, in <module>\n File \"/tmp/ansible_os_keystone_service_payload_tcilmnk0/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 207, in main\n File \"/tmp/ansible_os_keystone_service_payload_tcilmnk0/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_os_keystone_service_payload_tcilmnk0/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 113, in run\n File \"/tmp/ansible_os_keystone_service_payload_tcilmnk0/ansible_os_keystone_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 175, in _find\n File \"/opt/ansible/lib/python3.10/site-packages/openstack/service_description.py\", line 89, in __get__\n proxy = self._make_proxy(instance)\n File \"/opt/ansible/lib/python3.10/site-packages/openstack/service_description.py\", line 287, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 268, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/base.py\", line 131, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 203, in get_auth_ref\n self._plugin = self._do_create_plugin(session)\n File \"/opt/ansible/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py\", line 155, in _do_create_plugin\n raise exceptions.DiscoveryFailure(\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct.
Unable to establish connection to https://api-int.testbed.osism.xyz:5000: HTTPSConnectionPool(host='api-int.testbed.osism.xyz', port=5000): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x...>: Failed to establish a new connection: [Errno 113] No route to host'))\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} 2025-02-04 09:47:00.596340 | orchestrator | 2025-02-04 09:47:00.596463 | orchestrator | PLAY RECAP ********************************************************************* 2025-02-04 09:47:00.596485 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-02-04 09:47:00.596503 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-04 09:47:00.596519 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-02-04 09:47:00.596533 | orchestrator | 2025-02-04 09:47:00.596548 | orchestrator | 2025-02-04 09:47:00.596562 | orchestrator | TASKS RECAP ******************************************************************** 2025-02-04 09:47:00.596576 | orchestrator | Tuesday 04 February 2025 09:46:54 +0000 (0:01:19.446) 0:01:21.115 ****** 2025-02-04 09:47:00.596590 | orchestrator | =============================================================================== 2025-02-04 09:47:00.596631 | orchestrator | service-ks-register : octavia | Creating services ---------------------- 79.45s 2025-02-04 09:47:00.596646 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.61s 2025-02-04 09:47:00.596661 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.40s 2025-02-04 09:47:00.596675 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s 2025-02-04 09:47:00.596689 | orchestrator | 2025-02-04 09:46:57 | INFO  | Task 58ca35cf-61be-458d-b443-1faeda41be1b is in state SUCCESS 2025-02-04 09:47:00.596704 | orchestrator | 2025-02-04 09:46:57 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:47:00.596719 | orchestrator | 2025-02-04 09:46:57 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:47:00.596752 | orchestrator | 2025-02-04 09:47:00 | INFO  | Task f289e09a-9402-49d5-a8be-ae2603d60778 is in state STARTED [... state checks for tasks f289e09a-9402-49d5-a8be-ae2603d60778, 7e541f1e-d12b-4499-a990-004ec22bccd6 and 0fe9ee47-86c4-4b01-afb8-83e483a6870c repeat every 3 seconds until 09:47:15 ...]
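The failure above is a connectivity problem, not an Octavia one: all five retries of openstack.cloud.catalog_service die in the TCP connect with "[Errno 113] No route to host" against https://api-int.testbed.osism.xyz:5000, so Keystone is never even reached; the internal API VIP was not routable from testbed-node-0 at that moment (typically keepalived/haproxy not yet holding the VIP). Note also that the registration apparently runs only from testbed-node-0, which is why testbed-node-1 and testbed-node-2 stay green in the recap. A hypothetical pre-flight play that separates "VIP unreachable" from "Keystone broken" (the play, the task names and the openstack CLI call are illustrative, not part of kolla-ansible's service-ks-register role; admin credentials via a clouds.yaml entry are assumed):

- name: Pre-flight check for the internal Keystone endpoint (sketch)
  hosts: testbed-node-0
  gather_facts: false
  tasks:
    - name: Wait until the internal API VIP answers on the Keystone port
      # Fails with a clear timeout instead of five opaque module retries
      # when the VIP is not routable.
      ansible.builtin.wait_for:
        host: api-int.testbed.osism.xyz
        port: 5000
        timeout: 300

    - name: Create the octavia service entry (what the failing task was doing)
      ansible.builtin.command: >-
        openstack service create --name octavia
        --description "Octavia Load Balancing Service" load-balancer
      environment:
        OS_CLOUD: admin  # illustrative clouds.yaml entry
      register: service_create
      changed_when: service_create.rc == 0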
2025-02-04 09:47:15.804863 | orchestrator | 2025-02-04 09:47:15 | INFO  | Task f289e09a-9402-49d5-a8be-ae2603d60778 is in state SUCCESS 2025-02-04 09:47:15.806926 | orchestrator | 2025-02-04 09:47:15.806974 | orchestrator | 2025-02-04 09:47:15.806989 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-02-04 09:47:15.807031 | orchestrator | 2025-02-04 09:47:15.807047 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-02-04 09:47:15.807061 | orchestrator | Tuesday 04 February 2025 09:46:12 +0000 (0:00:00.289) 0:00:00.289 ****** 2025-02-04 09:47:15.807075 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:47:15.807091 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:47:15.807106 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:47:15.807120 | orchestrator | 2025-02-04 09:47:15.807134 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-02-04 09:47:15.807149 | orchestrator | Tuesday 04 February 2025 09:46:12 +0000 (0:00:00.485) 0:00:00.774 ****** 2025-02-04 09:47:15.807163 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-02-04 09:47:15.807192 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-02-04 09:47:15.807207 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-02-04 09:47:15.807221 | orchestrator | 2025-02-04 09:47:15.807235 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-02-04 09:47:15.807249 | orchestrator | 2025-02-04 09:47:15.807263 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-02-04 09:47:15.807277 | orchestrator | Tuesday 04 February 2025 09:46:13 +0000 (0:00:00.526) 0:00:01.301 ****** 2025-02-04 09:47:15.807292 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:47:15.807307 | orchestrator | 2025-02-04 09:47:15.807321 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-02-04 09:47:15.807415 |
orchestrator | Tuesday 04 February 2025 09:46:14 +0000 (0:00:00.785) 0:00:02.087 ****** 2025-02-04 09:47:15.807488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-04 09:47:15.807512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-04 09:47:15.807527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-04 09:47:15.807543 | orchestrator | 2025-02-04 09:47:15.807558 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-02-04 09:47:15.807572 | orchestrator | Tuesday 04 February 2025 09:46:15 +0000 (0:00:01.277) 0:00:03.364 ****** 2025-02-04 09:47:15.807596 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-02-04 09:47:15.807612 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-02-04 09:47:15.807626 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-04 09:47:15.807641 | orchestrator | 2025-02-04 09:47:15.807656 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-02-04 09:47:15.807670 | orchestrator | Tuesday 04 February 2025 09:46:16 +0000 (0:00:00.661) 0:00:04.025 ****** 2025-02-04 09:47:15.807684 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-02-04 09:47:15.807698 | orchestrator | 2025-02-04 09:47:15.807712 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-02-04 09:47:15.807727 | orchestrator | Tuesday 04 February 2025 09:46:16 +0000 (0:00:00.740) 0:00:04.766 ****** 2025-02-04 09:47:15.807756 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-04 09:47:15.807772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-04 09:47:15.807808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-04 09:47:15.807824 | orchestrator | 2025-02-04 09:47:15.807838 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-02-04 09:47:15.807853 | orchestrator | Tuesday 04 February 2025 09:46:18 +0000 (0:00:01.717) 0:00:06.484 ****** 2025-02-04 09:47:15.807868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-02-04 09:47:15.807883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-02-04 09:47:15.807905 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:47:15.807919 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:47:15.807945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-02-04 09:47:15.807960 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:47:15.807974 | orchestrator | 2025-02-04 09:47:15.807988 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-02-04 09:47:15.808003 | orchestrator | Tuesday 04 February 2025 09:46:19 +0000 (0:00:00.538) 0:00:07.022 ****** 2025-02-04 09:47:15.808017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-02-04 09:47:15.808032 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:47:15.808046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-02-04 09:47:15.808061 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:47:15.808806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-02-04 09:47:15.808840 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:47:15.808867 | orchestrator | 2025-02-04 09:47:15.808882 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-02-04 09:47:15.808897 | orchestrator | Tuesday 04 February 2025 09:46:19 +0000 (0:00:00.742) 0:00:07.764 ****** 2025-02-04 09:47:15.808911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-04 09:47:15.808926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-04 09:47:15.809001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-04 09:47:15.809020 | orchestrator | 2025-02-04 09:47:15.809034 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-02-04 09:47:15.809049 | orchestrator | Tuesday 04 February 2025 09:46:21 +0000 (0:00:01.245) 0:00:09.010 ****** 2025-02-04 09:47:15.809067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-04 09:47:15.809089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-04 09:47:15.809107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-02-04 09:47:15.809137 | orchestrator | 2025-02-04 09:47:15.809154 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-02-04 09:47:15.809177 | orchestrator | Tuesday 04 February 2025 09:46:22 +0000 (0:00:01.639) 0:00:10.650 ****** 2025-02-04 09:47:15.809192 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:47:15.809215 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:47:15.809229 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:47:15.809243 | orchestrator | 2025-02-04 09:47:15.809258 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-02-04 09:47:15.809272 | orchestrator | Tuesday 04 February 2025 09:46:23 +0000 (0:00:00.531) 0:00:11.181 ****** 2025-02-04 09:47:15.809286 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-02-04 09:47:15.809300 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-02-04 09:47:15.809315 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-02-04 09:47:15.809329 | orchestrator | 2025-02-04 09:47:15.809343 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-02-04 09:47:15.809357 | orchestrator | Tuesday 04 February 2025 09:46:24 +0000 (0:00:01.434) 0:00:12.616 ****** 2025-02-04 09:47:15.809372 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-02-04 09:47:15.809390 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-02-04 09:47:15.809406 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-02-04 09:47:15.809422 
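The "Configuring Prometheus as data source for Grafana" task above renders /ansible/roles/grafana/templates/prometheus.yaml.j2 into Grafana's provisioning directory on each node. The rendered file follows Grafana's standard datasource-provisioning schema; a minimal sketch of such a file, with placeholder values rather than the ones the role actually templates:

apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy                         # Grafana queries Prometheus server-side
    url: http://prometheus.internal:9091  # placeholder; kolla points this at its internal Prometheus endpoint
    isDefault: true
    basicAuth: true                       # kolla-ansible commonly fronts Prometheus with basic auth
    basicAuthUser: grafana                # placeholder user
    secureJsonData:
      basicAuthPassword: CHANGEME         # placeholder secret

Grafana reads files like this from its provisioning/datasources directory at startup, which is why the role only has to drop the file in place and restart the container.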
| orchestrator | 2025-02-04 09:47:15.809469 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-02-04 09:47:15.809487 | orchestrator | Tuesday 04 February 2025 09:46:26 +0000 (0:00:01.615) 0:00:14.232 ****** 2025-02-04 09:47:15.809503 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-02-04 09:47:15.809519 | orchestrator | 2025-02-04 09:47:15.809535 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-02-04 09:47:15.809551 | orchestrator | Tuesday 04 February 2025 09:46:26 +0000 (0:00:00.496) 0:00:14.728 ****** 2025-02-04 09:47:15.809567 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-02-04 09:47:15.809583 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-02-04 09:47:15.809599 | orchestrator | ok: [testbed-node-0] 2025-02-04 09:47:15.809615 | orchestrator | ok: [testbed-node-1] 2025-02-04 09:47:15.809631 | orchestrator | ok: [testbed-node-2] 2025-02-04 09:47:15.809648 | orchestrator | 2025-02-04 09:47:15.809674 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-02-04 09:47:15.809696 | orchestrator | Tuesday 04 February 2025 09:46:27 +0000 (0:00:00.965) 0:00:15.694 ****** 2025-02-04 09:47:15.809712 | orchestrator | skipping: [testbed-node-0] 2025-02-04 09:47:15.809728 | orchestrator | skipping: [testbed-node-1] 2025-02-04 09:47:15.809745 | orchestrator | skipping: [testbed-node-2] 2025-02-04 09:47:15.809760 | orchestrator | 2025-02-04 09:47:15.809774 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-02-04 09:47:15.809854 | orchestrator | Tuesday 04 February 2025 09:46:28 +0000 (0:00:00.374) 0:00:16.068 ****** 2025-02-04 09:47:15.809881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1093120, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6085942, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.809898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1093120, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6085942, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.809913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 
'inode': 1093120, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6085942, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.809929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1093078, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.5975935, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.809986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1093078, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.5975935, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1093078, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.5975935, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1093047, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.5905929, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1093047, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.5905929, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1093047, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.5905929, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1093104, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6025937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1093104, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6025937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1093104, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6025937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1093022, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.5825925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810226 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1093022, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.5825925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1093022, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.5825925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1093054, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.5945933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1093054, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.5945933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1093054, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.5945933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1093096, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6015937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1093096, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6015937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1093096, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6015937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1093019, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.5805922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1093019, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.5805922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1093019, 'dev': 
203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.5805922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1092932, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.415581, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1092932, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.415581, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1092932, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.415581, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1093026, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.5835924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1093026, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.5835924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1092976, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.5665913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1093026, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.5835924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1092976, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.5665913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1093088, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.5995936, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1092976, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.5665913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': 
{'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1093088, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.5995936, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1093029, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.5855925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1093088, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.5995936, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1093029, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.5855925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1093112, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.604594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1093029, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.5855925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1093112, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.604594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1093012, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.578592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1093112, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.604594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1093012, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.578592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1093068, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 
1738659311.5965934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1093012, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.578592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1093068, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.5965934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1092935, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.4245815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1093068, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.5965934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1092935, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.4245815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1092997, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.5735917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1092935, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.4245815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1092997, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.5735917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1093040, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.5875928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1092997, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.5735917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.810990 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1093040, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.5875928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1093299, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.663598, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1093040, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.5875928, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1093299, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.663598, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1093273, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6405964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1093299, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.663598, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1093273, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6405964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1093144, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6115944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1093273, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6405964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1093144, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6115944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1093435, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6875997, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1093144, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6115944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1093435, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6875997, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1093151, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6125944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1093435, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6875997, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1093151, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6125944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1093407, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.677599, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1093151, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6125944, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1093407, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.677599, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1093454, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6915998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1093407, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.677599, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1093454, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6915998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1093361, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6705985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1093454, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6915998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1093361, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6705985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1093397, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6755989, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1093361, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6705985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1093397, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6755989, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1093163, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6145947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1093397, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6755989, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1093163, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6145947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1093278, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6425965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1093163, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6145947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1093278, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6425965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1093479, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6936, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
38087, 'inode': 1093479, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6936, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1093278, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6425965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1093413, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6825993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1093479, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6936, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1093413, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6825993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1093181, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 
1738659311.6235952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1093413, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6825993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1093181, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6235952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1093173, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6155946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1093173, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6155946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-02-04 09:47:15.811760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1093181, 'dev': 203, 'nlink': 1, 'atime': 1738655511.0, 'mtime': 1738655511.0, 'ctime': 1738659311.6235952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-02-04 09:47:15.811773 | orchestrator | changed: [testbed-node-0] [testbed-node-1] [testbed-node-2] => (item=infrastructure/fluentd.json -> /operations/grafana/dashboards/infrastructure/fluentd.json, mode 0644, owner root:root, 82960 bytes)
2025-02-04 09:47:15.811824 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/database.json -> /operations/grafana/dashboards/infrastructure/database.json, mode 0644, owner root:root, 30898 bytes)
2025-02-04 09:47:15.811838 | orchestrator | changed: [testbed-node-0] [testbed-node-1] [testbed-node-2] => (item=infrastructure/haproxy.json -> /operations/grafana/dashboards/infrastructure/haproxy.json, mode 0644, owner root:root, 410814 bytes)
2025-02-04 09:47:15.811894 | orchestrator | changed: [testbed-node-0] [testbed-node-1] [testbed-node-2] => (item=infrastructure/node-cluster-rsrc-use.json -> /operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json, mode 0644, owner root:root, 16098 bytes)
2025-02-04 09:47:15.811926 | orchestrator | changed: [testbed-node-0] [testbed-node-1] [testbed-node-2] => (item=infrastructure/nodes.json -> /operations/grafana/dashboards/infrastructure/nodes.json, mode 0644, owner root:root, 21109 bytes)
2025-02-04 09:47:15.811982 | orchestrator | changed: [testbed-node-0] [testbed-node-1] [testbed-node-2] => (item=infrastructure/node-rsrc-use.json -> /operations/grafana/dashboards/infrastructure/node-rsrc-use.json, mode 0644, owner root:root, 15725 bytes)
2025-02-04 09:47:15.812027 | orchestrator | changed: [testbed-node-0] [testbed-node-1] [testbed-node-2] => (item=openstack/openstack.json -> /operations/grafana/dashboards/openstack/openstack.json, mode 0644, owner root:root, 57270 bytes)
2025-02-04 09:47:15.812110 | orchestrator |
2025-02-04 09:47:15.812124 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2025-02-04 09:47:15.812136 | orchestrator | Tuesday 04 February 2025 09:47:08 +0000 (0:00:40.131)       0:00:56.199 ******
2025-02-04 09:47:15.812156 | orchestrator | changed: [testbed-node-1] [testbed-node-2] [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
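
Each item in the container-check output above is a plain kolla-ansible service definition; the nested 'haproxy' dict is what drives the load-balancer frontends for the service. A minimal sketch (illustrative Python, not OSISM or kolla-ansible code) of reading that structure, with the dict literal mirroring the 'grafana' entry printed above:

# Extract the HAProxy frontend layout from a kolla service item.
# The dict mirrors the 'grafana' item printed in the log above.
service_item = {
    "key": "grafana",
    "value": {
        "container_name": "grafana",
        "image": "nexus.testbed.osism.xyz:8193/kolla/grafana:2024.1",
        "haproxy": {
            "grafana_server": {
                "enabled": "yes", "mode": "http", "external": False,
                "port": "3000", "listen_port": "3000",
            },
            "grafana_server_external": {
                "enabled": True, "mode": "http", "external": True,
                "external_fqdn": "api.testbed.osism.xyz",
                "port": "3000", "listen_port": "3000",
            },
        },
    },
}

for name, frontend in service_item["value"]["haproxy"].items():
    # kolla stores 'enabled' either as a bool or as the string "yes".
    if frontend.get("enabled") not in (True, "yes"):
        continue
    scope = "external" if frontend.get("external") else "internal"
    fqdn = frontend.get("external_fqdn", "internal VIP")
    print(f"{name}: {scope} frontend on :{frontend['listen_port']} ({fqdn})")

Run against the entry above, this reports an internal frontend on port 3000 and an external one on the same port behind api.testbed.osism.xyz.
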
2025-02-04 09:47:15.812195 | orchestrator |
2025-02-04 09:47:15.812208 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2025-02-04 09:47:15.812221 | orchestrator | Tuesday 04 February 2025 09:47:09 +0000 (0:00:01.463)       0:00:57.662 ******
2025-02-04 09:47:15.812235 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"action": "mysql_db", "changed": false, "msg": "unable to find /var/lib/ansible/.my.cnf. Exception message: (2003, \"Can't connect to MySQL server on 'api-int.testbed.osism.xyz' ([Errno 113] No route to host)\")"}
2025-02-04 09:47:15.812249 | orchestrator |
2025-02-04 09:47:15.812266 | orchestrator | PLAY RECAP *********************************************************************
2025-02-04 09:47:18.839540 | orchestrator | testbed-node-0 : ok=15  changed=8  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0
2025-02-04 09:47:18.839658 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0  skipped=4  rescued=0 ignored=0
2025-02-04 09:47:18.839679 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0  skipped=4  rescued=0 ignored=0
2025-02-04 09:47:18.839695 | orchestrator |
2025-02-04 09:47:18.839726 | orchestrator | TASKS RECAP ********************************************************************
2025-02-04 09:47:18.839742 | orchestrator | Tuesday 04 February 2025 09:47:15 +0000 (0:00:05.232)       0:01:02.895 ******
2025-02-04 09:47:18.839756 | orchestrator | ===============================================================================
2025-02-04 09:47:18.839770 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 40.13s
2025-02-04 09:47:18.839834 | orchestrator | grafana : Creating grafana database ------------------------------------- 5.23s
2025-02-04 09:47:18.839850 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.72s
2025-02-04 09:47:18.839891 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.64s
2025-02-04 09:47:18.839906 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.62s
2025-02-04 09:47:18.839920 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.46s
2025-02-04 09:47:18.839934 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.43s
2025-02-04 09:47:18.839948 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.28s
2025-02-04 09:47:18.839962 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.25s
2025-02-04 09:47:18.839976 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.97s
2025-02-04 09:47:18.839990 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.79s
2025-02-04 09:47:18.840004 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.74s
2025-02-04 09:47:18.840018 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.74s
2025-02-04 09:47:18.840033 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.66s
2025-02-04 09:47:18.840047 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.54s
2025-02-04 09:47:18.840062 | orchestrator | grafana : Copying over extra configuration file ------------------------- 0.53s
2025-02-04 09:47:18.840079 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.53s
2025-02-04 09:47:18.840094 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.50s
2025-02-04 09:47:18.840110 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.49s
2025-02-04 09:47:18.840127 | orchestrator | grafana : Prune templated Grafana dashboards ---------------------------- 0.37s
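
The "Creating grafana database" failure above is the mysql_db module surfacing errno 113 (EHOSTUNREACH): the internal API endpoint api-int.testbed.osism.xyz was not routable at that moment, which is a different failure mode from MariaDB refusing a connection. A minimal triage sketch in Python; port 3306 is an assumption (the MariaDB default, not stated in the log):

import errno
import socket

def probe(host: str, port: int = 3306, timeout: float = 5.0) -> str:
    # Distinguish "no route" (VIP down/unreachable) from "refused"
    # (host routable but nothing listening on the port).
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "connected"
    except OSError as exc:
        if exc.errno == errno.EHOSTUNREACH:
            return "no route to host (is the internal VIP up?)"
        if exc.errno == errno.ECONNREFUSED:
            return "connection refused (host routable, service not listening)"
        return f"failed: {exc}"

print(probe("api-int.testbed.osism.xyz"))
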
2025-02-04 09:47:18.840143 | orchestrator |
2025-02-04 09:47:18.840159 | orchestrator | 2025-02-04 09:47:15 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED
2025-02-04 09:47:18.840175 | orchestrator | 2025-02-04 09:47:15 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED
2025-02-04 09:47:18.840225 | orchestrator | 2025-02-04 09:47:15 | INFO  | Wait 1 second(s) until the next check
[... both tasks remain in state STARTED and the one-second check repeats every few seconds from 09:47:15 through 09:50:30 ...]
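
The repeating INFO lines are produced by a watcher that checks task state once per second and logs every check until the tasks reach a terminal state. A minimal sketch of such a loop; get_task_state is a hypothetical accessor standing in for OSISM's actual task API:

import time
from datetime import datetime

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    # Poll each pending task, log its state, and drop it once terminal.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"{datetime.now():%Y-%m-%d %H:%M:%S} | INFO  | "
                  f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):  # terminal Celery states
                pending.discard(task_id)
        if pending:
            print(f"{datetime.now():%Y-%m-%d %H:%M:%S} | INFO  | "
                  f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
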
[... polling of tasks 7e541f1e-d12b-4499-a990-004ec22bccd6 and 0fe9ee47-86c4-4b01-afb8-83e483a6870c continues, both in state STARTED, from 09:50:30 through 09:52:31 ...]
2025-02-04 09:52:31.864076 | orchestrator | 2025-02-04 09:52:31 | INFO  | Task 13c92981-cae2-41bd-bd8c-05bd71aaeab0 is in state STARTED
2025-02-04 09:52:44.022480 | orchestrator | 2025-02-04 09:52:44 | INFO  | Task 13c92981-cae2-41bd-bd8c-05bd71aaeab0 is in state SUCCESS
[... tasks 7e541f1e-d12b-4499-a990-004ec22bccd6 and 0fe9ee47-86c4-4b01-afb8-83e483a6870c remain in state STARTED; the one-second check keeps repeating through 09:54:09 ...]
2025-02-04 09:54:09.061041 | orchestrator | 2025-02-04 09:54:09 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state
STARTED 2025-02-04 09:54:12.108872 | orchestrator | 2025-02-04 09:54:09 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:54:12.108995 | orchestrator | 2025-02-04 09:54:12 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:54:12.109511 | orchestrator | 2025-02-04 09:54:12 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:54:15.138358 | orchestrator | 2025-02-04 09:54:12 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:54:15.138450 | orchestrator | 2025-02-04 09:54:15 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:54:15.138853 | orchestrator | 2025-02-04 09:54:15 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:54:15.138952 | orchestrator | 2025-02-04 09:54:15 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:54:18.171836 | orchestrator | 2025-02-04 09:54:18 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:54:18.173742 | orchestrator | 2025-02-04 09:54:18 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:54:21.205277 | orchestrator | 2025-02-04 09:54:18 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:54:21.205440 | orchestrator | 2025-02-04 09:54:21 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:54:24.236854 | orchestrator | 2025-02-04 09:54:21 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:54:24.236978 | orchestrator | 2025-02-04 09:54:21 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:54:24.237016 | orchestrator | 2025-02-04 09:54:24 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:54:24.237923 | orchestrator | 2025-02-04 09:54:24 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:54:27.276071 | orchestrator | 2025-02-04 09:54:24 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:54:27.276224 | orchestrator | 2025-02-04 09:54:27 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:54:27.276490 | orchestrator | 2025-02-04 09:54:27 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:54:30.313678 | orchestrator | 2025-02-04 09:54:27 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:54:30.313819 | orchestrator | 2025-02-04 09:54:30 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:54:30.314382 | orchestrator | 2025-02-04 09:54:30 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:54:33.355650 | orchestrator | 2025-02-04 09:54:30 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:54:33.355804 | orchestrator | 2025-02-04 09:54:33 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:54:33.356252 | orchestrator | 2025-02-04 09:54:33 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:54:36.401344 | orchestrator | 2025-02-04 09:54:33 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:54:36.401482 | orchestrator | 2025-02-04 09:54:36 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:54:36.401871 | orchestrator | 2025-02-04 09:54:36 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:54:39.442311 | orchestrator | 2025-02-04 09:54:36 | INFO  | Wait 1 second(s) 
until the next check 2025-02-04 09:54:39.442421 | orchestrator | 2025-02-04 09:54:39 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:54:39.443676 | orchestrator | 2025-02-04 09:54:39 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:54:42.473975 | orchestrator | 2025-02-04 09:54:39 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:54:42.474157 | orchestrator | 2025-02-04 09:54:42 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:54:45.517274 | orchestrator | 2025-02-04 09:54:42 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:54:45.517423 | orchestrator | 2025-02-04 09:54:42 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:54:45.517450 | orchestrator | 2025-02-04 09:54:45 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:54:48.553624 | orchestrator | 2025-02-04 09:54:45 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:54:48.553758 | orchestrator | 2025-02-04 09:54:45 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:54:48.553789 | orchestrator | 2025-02-04 09:54:48 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:54:48.554128 | orchestrator | 2025-02-04 09:54:48 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:54:51.591285 | orchestrator | 2025-02-04 09:54:48 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:54:51.591414 | orchestrator | 2025-02-04 09:54:51 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:54:54.627989 | orchestrator | 2025-02-04 09:54:51 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:54:54.628114 | orchestrator | 2025-02-04 09:54:51 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:54:54.628151 | orchestrator | 2025-02-04 09:54:54 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:54:54.630240 | orchestrator | 2025-02-04 09:54:54 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:54:57.664953 | orchestrator | 2025-02-04 09:54:54 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:54:57.665119 | orchestrator | 2025-02-04 09:54:57 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:54:57.665924 | orchestrator | 2025-02-04 09:54:57 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:55:00.709444 | orchestrator | 2025-02-04 09:54:57 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:55:00.709663 | orchestrator | 2025-02-04 09:55:00 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:55:00.710711 | orchestrator | 2025-02-04 09:55:00 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:55:03.746945 | orchestrator | 2025-02-04 09:55:00 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:55:03.747069 | orchestrator | 2025-02-04 09:55:03 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:55:03.748110 | orchestrator | 2025-02-04 09:55:03 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:55:06.793755 | orchestrator | 2025-02-04 09:55:03 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:55:06.793882 | orchestrator | 2025-02-04 09:55:06 | INFO  | 
Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:55:06.795030 | orchestrator | 2025-02-04 09:55:06 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:55:09.831868 | orchestrator | 2025-02-04 09:55:06 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:55:09.831997 | orchestrator | 2025-02-04 09:55:09 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:55:12.864210 | orchestrator | 2025-02-04 09:55:09 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:55:12.864327 | orchestrator | 2025-02-04 09:55:09 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:55:12.864387 | orchestrator | 2025-02-04 09:55:12 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:55:12.866461 | orchestrator | 2025-02-04 09:55:12 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:55:15.904628 | orchestrator | 2025-02-04 09:55:12 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:55:15.904757 | orchestrator | 2025-02-04 09:55:15 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:55:15.905353 | orchestrator | 2025-02-04 09:55:15 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:55:18.939573 | orchestrator | 2025-02-04 09:55:15 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:55:18.939703 | orchestrator | 2025-02-04 09:55:18 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:55:18.940664 | orchestrator | 2025-02-04 09:55:18 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:55:21.977176 | orchestrator | 2025-02-04 09:55:18 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:55:21.977267 | orchestrator | 2025-02-04 09:55:21 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:55:21.978121 | orchestrator | 2025-02-04 09:55:21 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:55:25.015582 | orchestrator | 2025-02-04 09:55:21 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:55:25.015648 | orchestrator | 2025-02-04 09:55:25 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:55:25.016523 | orchestrator | 2025-02-04 09:55:25 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:55:28.056610 | orchestrator | 2025-02-04 09:55:25 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:55:28.056788 | orchestrator | 2025-02-04 09:55:28 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:55:28.060663 | orchestrator | 2025-02-04 09:55:28 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:55:31.097138 | orchestrator | 2025-02-04 09:55:28 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:55:31.097252 | orchestrator | 2025-02-04 09:55:31 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:55:34.133748 | orchestrator | 2025-02-04 09:55:31 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:55:34.133831 | orchestrator | 2025-02-04 09:55:31 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:55:34.133852 | orchestrator | 2025-02-04 09:55:34 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:55:34.137782 | 
orchestrator | 2025-02-04 09:55:34 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:55:37.176728 | orchestrator | 2025-02-04 09:55:34 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:55:37.176872 | orchestrator | 2025-02-04 09:55:37 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:55:40.205457 | orchestrator | 2025-02-04 09:55:37 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:55:40.205625 | orchestrator | 2025-02-04 09:55:37 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:55:40.205667 | orchestrator | 2025-02-04 09:55:40 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:55:43.232466 | orchestrator | 2025-02-04 09:55:40 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:55:43.232632 | orchestrator | 2025-02-04 09:55:40 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:55:43.232672 | orchestrator | 2025-02-04 09:55:43 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:55:43.232921 | orchestrator | 2025-02-04 09:55:43 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:55:46.266602 | orchestrator | 2025-02-04 09:55:43 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:55:46.266750 | orchestrator | 2025-02-04 09:55:46 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:55:46.267159 | orchestrator | 2025-02-04 09:55:46 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:55:49.298440 | orchestrator | 2025-02-04 09:55:46 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:55:49.298553 | orchestrator | 2025-02-04 09:55:49 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:55:49.299408 | orchestrator | 2025-02-04 09:55:49 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:55:52.341741 | orchestrator | 2025-02-04 09:55:49 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:55:52.341895 | orchestrator | 2025-02-04 09:55:52 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:55:52.343113 | orchestrator | 2025-02-04 09:55:52 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:55:55.384963 | orchestrator | 2025-02-04 09:55:52 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:55:55.385122 | orchestrator | 2025-02-04 09:55:55 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:55:55.385878 | orchestrator | 2025-02-04 09:55:55 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:55:58.419283 | orchestrator | 2025-02-04 09:55:55 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:55:58.419423 | orchestrator | 2025-02-04 09:55:58 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:56:01.450139 | orchestrator | 2025-02-04 09:55:58 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:56:01.450282 | orchestrator | 2025-02-04 09:55:58 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:56:01.450320 | orchestrator | 2025-02-04 09:56:01 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:56:04.473509 | orchestrator | 2025-02-04 09:56:01 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state 
STARTED 2025-02-04 09:56:04.473690 | orchestrator | 2025-02-04 09:56:01 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:56:04.473849 | orchestrator | 2025-02-04 09:56:04 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:56:07.507298 | orchestrator | 2025-02-04 09:56:04 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:56:07.507488 | orchestrator | 2025-02-04 09:56:04 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:56:07.507569 | orchestrator | 2025-02-04 09:56:07 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:56:07.507904 | orchestrator | 2025-02-04 09:56:07 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:56:10.543675 | orchestrator | 2025-02-04 09:56:07 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:56:10.543825 | orchestrator | 2025-02-04 09:56:10 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:56:13.569260 | orchestrator | 2025-02-04 09:56:10 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:56:13.569383 | orchestrator | 2025-02-04 09:56:10 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:56:13.569421 | orchestrator | 2025-02-04 09:56:13 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:56:16.595421 | orchestrator | 2025-02-04 09:56:13 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:56:16.595633 | orchestrator | 2025-02-04 09:56:13 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:56:16.595688 | orchestrator | 2025-02-04 09:56:16 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:56:19.620892 | orchestrator | 2025-02-04 09:56:16 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:56:19.621017 | orchestrator | 2025-02-04 09:56:16 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:56:19.621055 | orchestrator | 2025-02-04 09:56:19 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:56:22.646435 | orchestrator | 2025-02-04 09:56:19 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:56:22.646594 | orchestrator | 2025-02-04 09:56:19 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:56:22.646636 | orchestrator | 2025-02-04 09:56:22 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:56:22.647200 | orchestrator | 2025-02-04 09:56:22 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:56:25.680969 | orchestrator | 2025-02-04 09:56:22 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:56:25.681106 | orchestrator | 2025-02-04 09:56:25 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:56:28.723182 | orchestrator | 2025-02-04 09:56:25 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:56:28.723284 | orchestrator | 2025-02-04 09:56:25 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:56:28.723311 | orchestrator | 2025-02-04 09:56:28 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:56:28.723973 | orchestrator | 2025-02-04 09:56:28 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:56:31.760084 | orchestrator | 2025-02-04 09:56:28 | INFO  | Wait 1 second(s) 
until the next check 2025-02-04 09:56:31.760240 | orchestrator | 2025-02-04 09:56:31 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:56:34.791774 | orchestrator | 2025-02-04 09:56:31 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:56:34.792164 | orchestrator | 2025-02-04 09:56:31 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:56:34.792243 | orchestrator | 2025-02-04 09:56:34 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:56:37.817670 | orchestrator | 2025-02-04 09:56:34 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:56:37.817790 | orchestrator | 2025-02-04 09:56:34 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:56:37.817828 | orchestrator | 2025-02-04 09:56:37 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:56:37.818293 | orchestrator | 2025-02-04 09:56:37 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:56:40.844900 | orchestrator | 2025-02-04 09:56:37 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:56:40.845009 | orchestrator | 2025-02-04 09:56:40 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:56:40.845181 | orchestrator | 2025-02-04 09:56:40 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:56:43.882322 | orchestrator | 2025-02-04 09:56:40 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:56:43.882486 | orchestrator | 2025-02-04 09:56:43 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:56:43.883376 | orchestrator | 2025-02-04 09:56:43 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:56:46.911711 | orchestrator | 2025-02-04 09:56:43 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:56:46.911879 | orchestrator | 2025-02-04 09:56:46 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:56:49.940374 | orchestrator | 2025-02-04 09:56:46 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:56:49.940596 | orchestrator | 2025-02-04 09:56:46 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:56:49.940655 | orchestrator | 2025-02-04 09:56:49 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:56:52.967775 | orchestrator | 2025-02-04 09:56:49 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:56:52.967895 | orchestrator | 2025-02-04 09:56:49 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:56:52.967934 | orchestrator | 2025-02-04 09:56:52 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:56:55.994413 | orchestrator | 2025-02-04 09:56:52 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:56:55.994600 | orchestrator | 2025-02-04 09:56:52 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:56:55.994641 | orchestrator | 2025-02-04 09:56:55 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:56:59.040935 | orchestrator | 2025-02-04 09:56:55 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:56:59.041057 | orchestrator | 2025-02-04 09:56:55 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:56:59.041097 | orchestrator | 2025-02-04 09:56:59 | INFO  | 
Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:57:02.065418 | orchestrator | 2025-02-04 09:56:59 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:57:02.065611 | orchestrator | 2025-02-04 09:56:59 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:57:02.065658 | orchestrator | 2025-02-04 09:57:02 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:57:02.066419 | orchestrator | 2025-02-04 09:57:02 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:57:05.095956 | orchestrator | 2025-02-04 09:57:02 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:57:05.096189 | orchestrator | 2025-02-04 09:57:05 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:57:08.126745 | orchestrator | 2025-02-04 09:57:05 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:57:08.126865 | orchestrator | 2025-02-04 09:57:05 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:57:08.126904 | orchestrator | 2025-02-04 09:57:08 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:57:08.127617 | orchestrator | 2025-02-04 09:57:08 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:57:11.153865 | orchestrator | 2025-02-04 09:57:08 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:57:11.153989 | orchestrator | 2025-02-04 09:57:11 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:57:11.154469 | orchestrator | 2025-02-04 09:57:11 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:57:14.186639 | orchestrator | 2025-02-04 09:57:11 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:57:14.186768 | orchestrator | 2025-02-04 09:57:14 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:57:17.220927 | orchestrator | 2025-02-04 09:57:14 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:57:17.221048 | orchestrator | 2025-02-04 09:57:14 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:57:17.221083 | orchestrator | 2025-02-04 09:57:17 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:57:20.253523 | orchestrator | 2025-02-04 09:57:17 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:57:20.253650 | orchestrator | 2025-02-04 09:57:17 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:57:20.253691 | orchestrator | 2025-02-04 09:57:20 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:57:23.281253 | orchestrator | 2025-02-04 09:57:20 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:57:23.281406 | orchestrator | 2025-02-04 09:57:20 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:57:23.281464 | orchestrator | 2025-02-04 09:57:23 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:57:23.282207 | orchestrator | 2025-02-04 09:57:23 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:57:26.314254 | orchestrator | 2025-02-04 09:57:23 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:57:26.314427 | orchestrator | 2025-02-04 09:57:26 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:57:29.348606 | 
orchestrator | 2025-02-04 09:57:26 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:57:29.348744 | orchestrator | 2025-02-04 09:57:26 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:57:29.348783 | orchestrator | 2025-02-04 09:57:29 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:57:29.349156 | orchestrator | 2025-02-04 09:57:29 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:57:32.389147 | orchestrator | 2025-02-04 09:57:29 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:57:32.389273 | orchestrator | 2025-02-04 09:57:32 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:57:32.389346 | orchestrator | 2025-02-04 09:57:32 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:57:35.427434 | orchestrator | 2025-02-04 09:57:32 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:57:35.427608 | orchestrator | 2025-02-04 09:57:35 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:57:35.429549 | orchestrator | 2025-02-04 09:57:35 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:57:38.465452 | orchestrator | 2025-02-04 09:57:35 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:57:38.465667 | orchestrator | 2025-02-04 09:57:38 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:57:38.467448 | orchestrator | 2025-02-04 09:57:38 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:57:41.508642 | orchestrator | 2025-02-04 09:57:38 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:57:41.508758 | orchestrator | 2025-02-04 09:57:41 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:57:41.509250 | orchestrator | 2025-02-04 09:57:41 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:57:44.549765 | orchestrator | 2025-02-04 09:57:41 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:57:44.549880 | orchestrator | 2025-02-04 09:57:44 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:57:44.552002 | orchestrator | 2025-02-04 09:57:44 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:57:47.590683 | orchestrator | 2025-02-04 09:57:44 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:57:47.590844 | orchestrator | 2025-02-04 09:57:47 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:57:47.591142 | orchestrator | 2025-02-04 09:57:47 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:57:50.626099 | orchestrator | 2025-02-04 09:57:47 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:57:50.626198 | orchestrator | 2025-02-04 09:57:50 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:57:50.626443 | orchestrator | 2025-02-04 09:57:50 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:57:53.671323 | orchestrator | 2025-02-04 09:57:50 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:57:53.671461 | orchestrator | 2025-02-04 09:57:53 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:57:53.672983 | orchestrator | 2025-02-04 09:57:53 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state 
STARTED 2025-02-04 09:57:56.702095 | orchestrator | 2025-02-04 09:57:53 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:57:56.702237 | orchestrator | 2025-02-04 09:57:56 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:57:59.734236 | orchestrator | 2025-02-04 09:57:56 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:57:59.734413 | orchestrator | 2025-02-04 09:57:56 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:57:59.734455 | orchestrator | 2025-02-04 09:57:59 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:57:59.734629 | orchestrator | 2025-02-04 09:57:59 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:58:02.761968 | orchestrator | 2025-02-04 09:57:59 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:58:02.762151 | orchestrator | 2025-02-04 09:58:02 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:58:02.762994 | orchestrator | 2025-02-04 09:58:02 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:58:05.789117 | orchestrator | 2025-02-04 09:58:02 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:58:05.789240 | orchestrator | 2025-02-04 09:58:05 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:58:05.789676 | orchestrator | 2025-02-04 09:58:05 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:58:08.823909 | orchestrator | 2025-02-04 09:58:05 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:58:08.824039 | orchestrator | 2025-02-04 09:58:08 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:58:08.824179 | orchestrator | 2025-02-04 09:58:08 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:58:08.824319 | orchestrator | 2025-02-04 09:58:08 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:58:11.850327 | orchestrator | 2025-02-04 09:58:11 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:58:11.850963 | orchestrator | 2025-02-04 09:58:11 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:58:14.876244 | orchestrator | 2025-02-04 09:58:11 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:58:14.876385 | orchestrator | 2025-02-04 09:58:14 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:58:14.876858 | orchestrator | 2025-02-04 09:58:14 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:58:17.904531 | orchestrator | 2025-02-04 09:58:14 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:58:17.904645 | orchestrator | 2025-02-04 09:58:17 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:58:17.905056 | orchestrator | 2025-02-04 09:58:17 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:58:20.931656 | orchestrator | 2025-02-04 09:58:17 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:58:20.931816 | orchestrator | 2025-02-04 09:58:20 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:58:20.932029 | orchestrator | 2025-02-04 09:58:20 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:58:23.963870 | orchestrator | 2025-02-04 09:58:20 | INFO  | Wait 1 second(s) 
until the next check 2025-02-04 09:58:23.964052 | orchestrator | 2025-02-04 09:58:23 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:58:23.965047 | orchestrator | 2025-02-04 09:58:23 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:58:23.965147 | orchestrator | 2025-02-04 09:58:23 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:58:26.993698 | orchestrator | 2025-02-04 09:58:26 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:58:26.994114 | orchestrator | 2025-02-04 09:58:26 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:58:30.024631 | orchestrator | 2025-02-04 09:58:26 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:58:30.024728 | orchestrator | 2025-02-04 09:58:30 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:58:33.054403 | orchestrator | 2025-02-04 09:58:30 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:58:33.054618 | orchestrator | 2025-02-04 09:58:30 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:58:33.054654 | orchestrator | 2025-02-04 09:58:33 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:58:33.054772 | orchestrator | 2025-02-04 09:58:33 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:58:36.084046 | orchestrator | 2025-02-04 09:58:33 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:58:36.084144 | orchestrator | 2025-02-04 09:58:36 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:58:36.084856 | orchestrator | 2025-02-04 09:58:36 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:58:39.121965 | orchestrator | 2025-02-04 09:58:36 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:58:39.122099 | orchestrator | 2025-02-04 09:58:39 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:58:39.122691 | orchestrator | 2025-02-04 09:58:39 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:58:42.169773 | orchestrator | 2025-02-04 09:58:39 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:58:42.169905 | orchestrator | 2025-02-04 09:58:42 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:58:42.171591 | orchestrator | 2025-02-04 09:58:42 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:58:45.211990 | orchestrator | 2025-02-04 09:58:42 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:58:45.212106 | orchestrator | 2025-02-04 09:58:45 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:58:45.212395 | orchestrator | 2025-02-04 09:58:45 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:58:48.256072 | orchestrator | 2025-02-04 09:58:45 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:58:48.256286 | orchestrator | 2025-02-04 09:58:48 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:58:48.256501 | orchestrator | 2025-02-04 09:58:48 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:58:51.292617 | orchestrator | 2025-02-04 09:58:48 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:58:51.292772 | orchestrator | 2025-02-04 09:58:51 | INFO  | 
Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:58:54.325754 | orchestrator | 2025-02-04 09:58:51 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:58:54.325875 | orchestrator | 2025-02-04 09:58:51 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:58:54.325912 | orchestrator | 2025-02-04 09:58:54 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:58:54.326555 | orchestrator | 2025-02-04 09:58:54 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:58:57.366724 | orchestrator | 2025-02-04 09:58:54 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:58:57.366846 | orchestrator | 2025-02-04 09:58:57 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:58:57.367209 | orchestrator | 2025-02-04 09:58:57 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:59:00.412534 | orchestrator | 2025-02-04 09:58:57 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:59:00.412701 | orchestrator | 2025-02-04 09:59:00 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:59:00.413406 | orchestrator | 2025-02-04 09:59:00 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:59:03.449822 | orchestrator | 2025-02-04 09:59:00 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:59:03.449990 | orchestrator | 2025-02-04 09:59:03 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:59:06.476181 | orchestrator | 2025-02-04 09:59:03 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:59:06.476304 | orchestrator | 2025-02-04 09:59:03 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:59:06.476366 | orchestrator | 2025-02-04 09:59:06 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:59:09.514667 | orchestrator | 2025-02-04 09:59:06 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:59:09.514776 | orchestrator | 2025-02-04 09:59:06 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:59:09.514811 | orchestrator | 2025-02-04 09:59:09 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:59:09.516959 | orchestrator | 2025-02-04 09:59:09 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:59:12.554579 | orchestrator | 2025-02-04 09:59:09 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:59:12.554712 | orchestrator | 2025-02-04 09:59:12 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:59:12.555043 | orchestrator | 2025-02-04 09:59:12 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:59:15.595763 | orchestrator | 2025-02-04 09:59:12 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:59:15.595915 | orchestrator | 2025-02-04 09:59:15 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:59:15.596090 | orchestrator | 2025-02-04 09:59:15 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:59:18.632386 | orchestrator | 2025-02-04 09:59:15 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:59:18.632589 | orchestrator | 2025-02-04 09:59:18 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:59:21.676628 | 
orchestrator | 2025-02-04 09:59:18 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:59:21.676754 | orchestrator | 2025-02-04 09:59:18 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:59:21.676793 | orchestrator | 2025-02-04 09:59:21 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:59:24.719284 | orchestrator | 2025-02-04 09:59:21 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:59:24.719403 | orchestrator | 2025-02-04 09:59:21 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:59:24.719475 | orchestrator | 2025-02-04 09:59:24 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:59:24.720874 | orchestrator | 2025-02-04 09:59:24 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:59:27.752111 | orchestrator | 2025-02-04 09:59:24 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:59:27.752254 | orchestrator | 2025-02-04 09:59:27 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:59:27.752361 | orchestrator | 2025-02-04 09:59:27 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:59:30.792847 | orchestrator | 2025-02-04 09:59:27 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:59:30.792976 | orchestrator | 2025-02-04 09:59:30 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:59:30.793650 | orchestrator | 2025-02-04 09:59:30 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:59:33.828233 | orchestrator | 2025-02-04 09:59:30 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:59:33.828410 | orchestrator | 2025-02-04 09:59:33 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:59:33.828678 | orchestrator | 2025-02-04 09:59:33 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:59:36.867324 | orchestrator | 2025-02-04 09:59:33 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:59:36.867588 | orchestrator | 2025-02-04 09:59:36 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:59:36.867749 | orchestrator | 2025-02-04 09:59:36 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:59:39.894819 | orchestrator | 2025-02-04 09:59:36 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:59:39.895015 | orchestrator | 2025-02-04 09:59:39 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:59:39.895146 | orchestrator | 2025-02-04 09:59:39 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:59:39.896187 | orchestrator | 2025-02-04 09:59:39 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:59:42.921658 | orchestrator | 2025-02-04 09:59:42 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:59:45.962693 | orchestrator | 2025-02-04 09:59:42 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:59:45.962819 | orchestrator | 2025-02-04 09:59:42 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:59:45.962860 | orchestrator | 2025-02-04 09:59:45 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:59:45.962944 | orchestrator | 2025-02-04 09:59:45 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state 
STARTED 2025-02-04 09:59:48.992647 | orchestrator | 2025-02-04 09:59:45 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:59:48.992772 | orchestrator | 2025-02-04 09:59:48 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:59:48.993023 | orchestrator | 2025-02-04 09:59:48 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:59:52.024270 | orchestrator | 2025-02-04 09:59:48 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:59:52.024533 | orchestrator | 2025-02-04 09:59:52 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:59:52.024633 | orchestrator | 2025-02-04 09:59:52 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:59:55.056473 | orchestrator | 2025-02-04 09:59:52 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:59:55.056627 | orchestrator | 2025-02-04 09:59:55 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:59:55.057691 | orchestrator | 2025-02-04 09:59:55 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 09:59:58.099971 | orchestrator | 2025-02-04 09:59:55 | INFO  | Wait 1 second(s) until the next check 2025-02-04 09:59:58.100106 | orchestrator | 2025-02-04 09:59:58 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 09:59:58.100633 | orchestrator | 2025-02-04 09:59:58 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:00:01.139932 | orchestrator | 2025-02-04 09:59:58 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:00:01.140075 | orchestrator | 2025-02-04 10:00:01 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:00:01.140360 | orchestrator | 2025-02-04 10:00:01 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:00:04.177101 | orchestrator | 2025-02-04 10:00:01 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:00:04.177205 | orchestrator | 2025-02-04 10:00:04 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:00:04.177796 | orchestrator | 2025-02-04 10:00:04 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:00:07.219594 | orchestrator | 2025-02-04 10:00:04 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:00:07.219840 | orchestrator | 2025-02-04 10:00:07 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:00:07.219980 | orchestrator | 2025-02-04 10:00:07 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:00:10.262924 | orchestrator | 2025-02-04 10:00:07 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:00:10.263106 | orchestrator | 2025-02-04 10:00:10 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:00:10.264521 | orchestrator | 2025-02-04 10:00:10 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:00:13.305679 | orchestrator | 2025-02-04 10:00:10 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:00:13.305814 | orchestrator | 2025-02-04 10:00:13 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:00:13.306328 | orchestrator | 2025-02-04 10:00:13 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:00:16.345632 | orchestrator | 2025-02-04 10:00:13 | INFO  | Wait 1 second(s) 
until the next check 2025-02-04 10:00:16.345750 | orchestrator | 2025-02-04 10:00:16 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:00:19.377937 | orchestrator | 2025-02-04 10:00:16 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:00:19.378194 | orchestrator | 2025-02-04 10:00:16 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:00:19.378240 | orchestrator | 2025-02-04 10:00:19 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:00:19.378334 | orchestrator | 2025-02-04 10:00:19 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:00:22.424279 | orchestrator | 2025-02-04 10:00:19 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:00:22.424513 | orchestrator | 2025-02-04 10:00:22 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:00:22.426203 | orchestrator | 2025-02-04 10:00:22 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:00:25.460321 | orchestrator | 2025-02-04 10:00:22 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:00:25.460505 | orchestrator | 2025-02-04 10:00:25 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:00:25.462202 | orchestrator | 2025-02-04 10:00:25 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:00:28.503904 | orchestrator | 2025-02-04 10:00:25 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:00:28.504026 | orchestrator | 2025-02-04 10:00:28 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:00:28.505851 | orchestrator | 2025-02-04 10:00:28 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:00:31.546295 | orchestrator | 2025-02-04 10:00:28 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:00:31.546413 | orchestrator | 2025-02-04 10:00:31 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:00:31.547014 | orchestrator | 2025-02-04 10:00:31 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:00:34.590726 | orchestrator | 2025-02-04 10:00:31 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:00:34.590860 | orchestrator | 2025-02-04 10:00:34 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:00:34.591778 | orchestrator | 2025-02-04 10:00:34 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:00:37.628942 | orchestrator | 2025-02-04 10:00:34 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:00:37.629081 | orchestrator | 2025-02-04 10:00:37 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:00:37.630551 | orchestrator | 2025-02-04 10:00:37 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:00:40.678962 | orchestrator | 2025-02-04 10:00:37 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:00:40.679169 | orchestrator | 2025-02-04 10:00:40 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:00:40.679634 | orchestrator | 2025-02-04 10:00:40 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:00:43.718517 | orchestrator | 2025-02-04 10:00:40 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:00:43.718630 | orchestrator | 2025-02-04 10:00:43 | INFO  | 
Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:00:46.767248 | orchestrator | 2025-02-04 10:00:43 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:00:46.767348 | orchestrator | 2025-02-04 10:00:43 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:00:46.767376 | orchestrator | 2025-02-04 10:00:46 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:00:49.797384 | orchestrator | 2025-02-04 10:00:46 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:00:49.797508 | orchestrator | 2025-02-04 10:00:46 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:00:49.797539 | orchestrator | 2025-02-04 10:00:49 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:00:49.797647 | orchestrator | 2025-02-04 10:00:49 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:00:52.821154 | orchestrator | 2025-02-04 10:00:49 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:00:52.821257 | orchestrator | 2025-02-04 10:00:52 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:00:55.849955 | orchestrator | 2025-02-04 10:00:52 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:00:55.850100 | orchestrator | 2025-02-04 10:00:52 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:00:55.850127 | orchestrator | 2025-02-04 10:00:55 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:00:55.851396 | orchestrator | 2025-02-04 10:00:55 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:00:58.884840 | orchestrator | 2025-02-04 10:00:55 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:00:58.884977 | orchestrator | 2025-02-04 10:00:58 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:01:01.914078 | orchestrator | 2025-02-04 10:00:58 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:01:01.914213 | orchestrator | 2025-02-04 10:00:58 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:01:01.914296 | orchestrator | 2025-02-04 10:01:01 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:01:01.914597 | orchestrator | 2025-02-04 10:01:01 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:01:04.951658 | orchestrator | 2025-02-04 10:01:01 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:01:04.951782 | orchestrator | 2025-02-04 10:01:04 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:01:04.952154 | orchestrator | 2025-02-04 10:01:04 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:01:07.993716 | orchestrator | 2025-02-04 10:01:04 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:01:07.993823 | orchestrator | 2025-02-04 10:01:07 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:01:07.994220 | orchestrator | 2025-02-04 10:01:07 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:01:07.994302 | orchestrator | 2025-02-04 10:01:07 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:01:11.033528 | orchestrator | 2025-02-04 10:01:11 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:01:11.033756 | 
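The output above comes from a watcher that polls each outstanding task, logs its state on every pass, and sleeps one second between passes until every task has left a running state. A minimal sketch of that polling pattern, assuming a Celery-style result backend (wait_for_tasks and get_task_state are illustrative names only, not the OSISM implementation):

    import time

    # States that mean "still running" in Celery's task-state vocabulary,
    # which matches the STARTED/SUCCESS strings seen in the log above.
    PENDING_STATES = {"PENDING", "STARTED"}

    def wait_for_tasks(task_ids, get_task_state, interval=1.0):
        """Poll every task until none is left in a pending state."""
        remaining = set(task_ids)
        while remaining:
            # sorted() copies the set, so discarding inside the loop is safe
            for task_id in sorted(remaining):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state not in PENDING_STATES:
                    remaining.discard(task_id)
            if remaining:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)
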
orchestrator | 2025-02-04 10:01:11 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:01:14.076335 | orchestrator | 2025-02-04 10:01:11 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:01:14.076524 | orchestrator | 2025-02-04 10:01:14 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:01:14.077376 | orchestrator | 2025-02-04 10:01:14 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:01:17.120159 | orchestrator | 2025-02-04 10:01:14 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:01:17.120311 | orchestrator | 2025-02-04 10:01:17 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:01:17.120368 | orchestrator | 2025-02-04 10:01:17 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:01:20.151255 | orchestrator | 2025-02-04 10:01:17 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:01:20.151355 | orchestrator | 2025-02-04 10:01:20 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:01:20.151851 | orchestrator | 2025-02-04 10:01:20 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:01:23.178317 | orchestrator | 2025-02-04 10:01:20 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:01:23.178529 | orchestrator | 2025-02-04 10:01:23 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:01:23.178638 | orchestrator | 2025-02-04 10:01:23 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:01:26.207140 | orchestrator | 2025-02-04 10:01:23 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:01:26.207279 | orchestrator | 2025-02-04 10:01:26 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:01:29.237782 | orchestrator | 2025-02-04 10:01:26 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:01:29.237904 | orchestrator | 2025-02-04 10:01:26 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:01:29.237999 | orchestrator | 2025-02-04 10:01:29 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:01:29.238141 | orchestrator | 2025-02-04 10:01:29 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:01:32.276953 | orchestrator | 2025-02-04 10:01:29 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:01:32.277093 | orchestrator | 2025-02-04 10:01:32 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:01:32.277604 | orchestrator | 2025-02-04 10:01:32 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:01:32.278637 | orchestrator | 2025-02-04 10:01:32 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:01:35.320711 | orchestrator | 2025-02-04 10:01:35 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:01:35.321884 | orchestrator | 2025-02-04 10:01:35 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:01:38.355911 | orchestrator | 2025-02-04 10:01:35 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:01:38.356043 | orchestrator | 2025-02-04 10:01:38 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:01:38.357433 | orchestrator | 2025-02-04 10:01:38 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state 
STARTED 2025-02-04 10:01:41.392836 | orchestrator | 2025-02-04 10:01:38 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:01:41.392991 | orchestrator | 2025-02-04 10:01:41 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:01:41.393064 | orchestrator | 2025-02-04 10:01:41 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:01:44.419721 | orchestrator | 2025-02-04 10:01:41 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:01:44.419919 | orchestrator | 2025-02-04 10:01:44 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:01:44.420056 | orchestrator | 2025-02-04 10:01:44 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:01:47.456916 | orchestrator | 2025-02-04 10:01:44 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:01:47.457050 | orchestrator | 2025-02-04 10:01:47 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:01:47.457501 | orchestrator | 2025-02-04 10:01:47 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:01:50.492530 | orchestrator | 2025-02-04 10:01:47 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:01:50.492749 | orchestrator | 2025-02-04 10:01:50 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:01:50.492859 | orchestrator | 2025-02-04 10:01:50 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:01:53.528630 | orchestrator | 2025-02-04 10:01:50 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:01:53.528760 | orchestrator | 2025-02-04 10:01:53 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:01:53.530671 | orchestrator | 2025-02-04 10:01:53 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:01:56.557938 | orchestrator | 2025-02-04 10:01:53 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:01:56.558278 | orchestrator | 2025-02-04 10:01:56 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:01:56.558358 | orchestrator | 2025-02-04 10:01:56 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:01:56.558382 | orchestrator | 2025-02-04 10:01:56 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:01:59.591868 | orchestrator | 2025-02-04 10:01:59 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:01:59.593219 | orchestrator | 2025-02-04 10:01:59 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:02:02.624598 | orchestrator | 2025-02-04 10:01:59 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:02:02.624800 | orchestrator | 2025-02-04 10:02:02 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:02:02.624889 | orchestrator | 2025-02-04 10:02:02 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:02:05.654723 | orchestrator | 2025-02-04 10:02:02 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:02:05.654918 | orchestrator | 2025-02-04 10:02:05 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:02:05.655009 | orchestrator | 2025-02-04 10:02:05 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:02:08.683812 | orchestrator | 2025-02-04 10:02:05 | INFO  | Wait 1 second(s) 
until the next check 2025-02-04 10:02:08.683955 | orchestrator | 2025-02-04 10:02:08 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:02:08.684153 | orchestrator | 2025-02-04 10:02:08 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:02:11.712769 | orchestrator | 2025-02-04 10:02:08 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:02:11.712901 | orchestrator | 2025-02-04 10:02:11 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:02:14.740586 | orchestrator | 2025-02-04 10:02:11 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:02:14.740702 | orchestrator | 2025-02-04 10:02:11 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:02:14.740728 | orchestrator | 2025-02-04 10:02:14 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:02:14.740772 | orchestrator | 2025-02-04 10:02:14 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:02:14.740786 | orchestrator | 2025-02-04 10:02:14 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:02:17.781732 | orchestrator | 2025-02-04 10:02:17 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:02:17.782844 | orchestrator | 2025-02-04 10:02:17 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:02:20.813275 | orchestrator | 2025-02-04 10:02:17 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:02:20.813465 | orchestrator | 2025-02-04 10:02:20 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:02:20.813818 | orchestrator | 2025-02-04 10:02:20 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:02:23.843551 | orchestrator | 2025-02-04 10:02:20 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:02:23.843725 | orchestrator | 2025-02-04 10:02:23 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:02:23.843983 | orchestrator | 2025-02-04 10:02:23 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:02:26.878682 | orchestrator | 2025-02-04 10:02:23 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:02:26.878891 | orchestrator | 2025-02-04 10:02:26 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:02:26.878984 | orchestrator | 2025-02-04 10:02:26 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:02:29.914245 | orchestrator | 2025-02-04 10:02:26 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:02:29.914377 | orchestrator | 2025-02-04 10:02:29 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:02:29.914546 | orchestrator | 2025-02-04 10:02:29 | INFO  | Task 20ff2eb0-436a-43ab-90d0-d13860684b6a is in state STARTED 2025-02-04 10:02:29.915771 | orchestrator | 2025-02-04 10:02:29 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:02:32.964342 | orchestrator | 2025-02-04 10:02:29 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:02:32.964483 | orchestrator | 2025-02-04 10:02:32 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:02:32.965833 | orchestrator | 2025-02-04 10:02:32 | INFO  | Task 20ff2eb0-436a-43ab-90d0-d13860684b6a is in state STARTED 2025-02-04 10:02:32.965875 | orchestrator | 
2025-02-04 10:02:32 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:02:32.965932 | orchestrator | 2025-02-04 10:02:32 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:02:36.001257 | orchestrator | 2025-02-04 10:02:35 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:02:36.001523 | orchestrator | 2025-02-04 10:02:35 | INFO  | Task 20ff2eb0-436a-43ab-90d0-d13860684b6a is in state STARTED 2025-02-04 10:02:36.002327 | orchestrator | 2025-02-04 10:02:35 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:02:39.052044 | orchestrator | 2025-02-04 10:02:35 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:02:39.052198 | orchestrator | 2025-02-04 10:02:39 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:02:39.052586 | orchestrator | 2025-02-04 10:02:39 | INFO  | Task 20ff2eb0-436a-43ab-90d0-d13860684b6a is in state STARTED 2025-02-04 10:02:39.053005 | orchestrator | 2025-02-04 10:02:39 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:02:39.053135 | orchestrator | 2025-02-04 10:02:39 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:02:42.085602 | orchestrator | 2025-02-04 10:02:42 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:02:42.086560 | orchestrator | 2025-02-04 10:02:42 | INFO  | Task 20ff2eb0-436a-43ab-90d0-d13860684b6a is in state SUCCESS 2025-02-04 10:02:42.086619 | orchestrator | 2025-02-04 10:02:42 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:02:45.117479 | orchestrator | 2025-02-04 10:02:42 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:02:45.117612 | orchestrator | 2025-02-04 10:02:45 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:02:45.118286 | orchestrator | 2025-02-04 10:02:45 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:02:48.160184 | orchestrator | 2025-02-04 10:02:45 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:02:48.160376 | orchestrator | 2025-02-04 10:02:48 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:02:48.160524 | orchestrator | 2025-02-04 10:02:48 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:02:51.202933 | orchestrator | 2025-02-04 10:02:48 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:02:51.203069 | orchestrator | 2025-02-04 10:02:51 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:02:51.203449 | orchestrator | 2025-02-04 10:02:51 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:02:54.238748 | orchestrator | 2025-02-04 10:02:51 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:02:54.238880 | orchestrator | 2025-02-04 10:02:54 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:02:54.239543 | orchestrator | 2025-02-04 10:02:54 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:02:57.285926 | orchestrator | 2025-02-04 10:02:54 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:02:57.286125 | orchestrator | 2025-02-04 10:02:57 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:02:57.286721 | orchestrator | 2025-02-04 10:02:57 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is 
in state STARTED 2025-02-04 10:03:00.328741 | orchestrator | 2025-02-04 10:02:57 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:03:00.328868 | orchestrator | 2025-02-04 10:03:00 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:03:00.329545 | orchestrator | 2025-02-04 10:03:00 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:03:03.373464 | orchestrator | 2025-02-04 10:03:00 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:03:03.373597 | orchestrator | 2025-02-04 10:03:03 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:03:03.374606 | orchestrator | 2025-02-04 10:03:03 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:03:06.416603 | orchestrator | 2025-02-04 10:03:03 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:03:06.416721 | orchestrator | 2025-02-04 10:03:06 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:03:06.417567 | orchestrator | 2025-02-04 10:03:06 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:03:09.455170 | orchestrator | 2025-02-04 10:03:06 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:03:09.455329 | orchestrator | 2025-02-04 10:03:09 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:03:09.456120 | orchestrator | 2025-02-04 10:03:09 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:03:12.511188 | orchestrator | 2025-02-04 10:03:09 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:03:12.511287 | orchestrator | 2025-02-04 10:03:12 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:03:12.511567 | orchestrator | 2025-02-04 10:03:12 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:03:15.536534 | orchestrator | 2025-02-04 10:03:12 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:03:15.536664 | orchestrator | 2025-02-04 10:03:15 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:03:15.537539 | orchestrator | 2025-02-04 10:03:15 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:03:18.563148 | orchestrator | 2025-02-04 10:03:15 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:03:18.563279 | orchestrator | 2025-02-04 10:03:18 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:03:18.563944 | orchestrator | 2025-02-04 10:03:18 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:03:21.600830 | orchestrator | 2025-02-04 10:03:18 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:03:21.600962 | orchestrator | 2025-02-04 10:03:21 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:03:21.601565 | orchestrator | 2025-02-04 10:03:21 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:03:24.640776 | orchestrator | 2025-02-04 10:03:21 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:03:24.640866 | orchestrator | 2025-02-04 10:03:24 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:03:24.641204 | orchestrator | 2025-02-04 10:03:24 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:03:27.689326 | orchestrator | 2025-02-04 10:03:24 | INFO  | Wait 1 
second(s) until the next check 2025-02-04 10:03:27.689497 | orchestrator | 2025-02-04 10:03:27 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:03:27.689809 | orchestrator | 2025-02-04 10:03:27 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:03:30.724044 | orchestrator | 2025-02-04 10:03:27 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:03:30.724175 | orchestrator | 2025-02-04 10:03:30 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:03:30.724603 | orchestrator | 2025-02-04 10:03:30 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:03:30.724679 | orchestrator | 2025-02-04 10:03:30 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:03:33.765861 | orchestrator | 2025-02-04 10:03:33 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:03:36.809974 | orchestrator | 2025-02-04 10:03:33 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:03:36.810190 | orchestrator | 2025-02-04 10:03:33 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:03:36.810233 | orchestrator | 2025-02-04 10:03:36 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:03:36.811229 | orchestrator | 2025-02-04 10:03:36 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:03:39.846411 | orchestrator | 2025-02-04 10:03:36 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:03:39.846517 | orchestrator | 2025-02-04 10:03:39 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:03:39.846932 | orchestrator | 2025-02-04 10:03:39 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:03:39.847175 | orchestrator | 2025-02-04 10:03:39 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:03:42.885836 | orchestrator | 2025-02-04 10:03:42 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:03:42.886093 | orchestrator | 2025-02-04 10:03:42 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:03:42.886131 | orchestrator | 2025-02-04 10:03:42 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:03:45.926559 | orchestrator | 2025-02-04 10:03:45 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:03:45.927564 | orchestrator | 2025-02-04 10:03:45 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:03:48.978463 | orchestrator | 2025-02-04 10:03:45 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:03:48.978595 | orchestrator | 2025-02-04 10:03:48 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:03:48.979692 | orchestrator | 2025-02-04 10:03:48 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:03:52.016994 | orchestrator | 2025-02-04 10:03:48 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:03:52.017088 | orchestrator | 2025-02-04 10:03:52 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:03:55.048205 | orchestrator | 2025-02-04 10:03:52 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:03:55.048313 | orchestrator | 2025-02-04 10:03:52 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:03:55.048346 | orchestrator | 2025-02-04 10:03:55 | 
INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:03:55.048711 | orchestrator | 2025-02-04 10:03:55 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:03:58.077049 | orchestrator | 2025-02-04 10:03:55 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:03:58.077156 | orchestrator | 2025-02-04 10:03:58 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:04:01.106759 | orchestrator | 2025-02-04 10:03:58 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:04:01.106879 | orchestrator | 2025-02-04 10:03:58 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:04:01.106943 | orchestrator | 2025-02-04 10:04:01 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:04:01.107028 | orchestrator | 2025-02-04 10:04:01 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:04:04.142955 | orchestrator | 2025-02-04 10:04:01 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:04:04.143081 | orchestrator | 2025-02-04 10:04:04 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:04:04.143699 | orchestrator | 2025-02-04 10:04:04 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:04:07.183726 | orchestrator | 2025-02-04 10:04:04 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:04:07.183855 | orchestrator | 2025-02-04 10:04:07 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:04:07.184841 | orchestrator | 2025-02-04 10:04:07 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:04:10.223980 | orchestrator | 2025-02-04 10:04:07 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:04:10.224121 | orchestrator | 2025-02-04 10:04:10 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:04:10.224892 | orchestrator | 2025-02-04 10:04:10 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:04:13.262221 | orchestrator | 2025-02-04 10:04:10 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:04:13.262488 | orchestrator | 2025-02-04 10:04:13 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:04:13.263892 | orchestrator | 2025-02-04 10:04:13 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:04:16.305474 | orchestrator | 2025-02-04 10:04:13 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:04:16.305601 | orchestrator | 2025-02-04 10:04:16 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:04:16.306186 | orchestrator | 2025-02-04 10:04:16 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:04:19.361693 | orchestrator | 2025-02-04 10:04:16 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:04:19.361838 | orchestrator | 2025-02-04 10:04:19 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:04:19.363735 | orchestrator | 2025-02-04 10:04:19 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:04:22.406939 | orchestrator | 2025-02-04 10:04:19 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:04:22.407068 | orchestrator | 2025-02-04 10:04:22 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:04:22.407566 | 
orchestrator | 2025-02-04 10:04:22 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:04:25.445363 | orchestrator | 2025-02-04 10:04:22 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:04:25.445536 | orchestrator | 2025-02-04 10:04:25 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:04:25.446540 | orchestrator | 2025-02-04 10:04:25 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:04:28.478793 | orchestrator | 2025-02-04 10:04:25 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:04:28.478927 | orchestrator | 2025-02-04 10:04:28 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:04:28.479083 | orchestrator | 2025-02-04 10:04:28 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:04:31.505729 | orchestrator | 2025-02-04 10:04:28 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:04:31.505876 | orchestrator | 2025-02-04 10:04:31 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:04:34.544749 | orchestrator | 2025-02-04 10:04:31 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:04:34.544918 | orchestrator | 2025-02-04 10:04:31 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:04:34.544975 | orchestrator | 2025-02-04 10:04:34 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:04:34.545661 | orchestrator | 2025-02-04 10:04:34 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:04:37.581922 | orchestrator | 2025-02-04 10:04:34 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:04:37.582104 | orchestrator | 2025-02-04 10:04:37 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:04:37.583421 | orchestrator | 2025-02-04 10:04:37 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:04:40.622058 | orchestrator | 2025-02-04 10:04:37 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:04:40.622186 | orchestrator | 2025-02-04 10:04:40 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:04:40.623640 | orchestrator | 2025-02-04 10:04:40 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:04:43.668966 | orchestrator | 2025-02-04 10:04:40 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:04:43.669062 | orchestrator | 2025-02-04 10:04:43 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:04:43.670216 | orchestrator | 2025-02-04 10:04:43 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:04:46.714237 | orchestrator | 2025-02-04 10:04:43 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:04:46.714344 | orchestrator | 2025-02-04 10:04:46 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:04:49.748303 | orchestrator | 2025-02-04 10:04:46 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:04:49.748474 | orchestrator | 2025-02-04 10:04:46 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:04:49.748510 | orchestrator | 2025-02-04 10:04:49 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:04:49.748783 | orchestrator | 2025-02-04 10:04:49 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state 
STARTED 2025-02-04 10:04:52.781290 | orchestrator | 2025-02-04 10:04:49 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:04:52.781612 | orchestrator | 2025-02-04 10:04:52 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:04:52.781782 | orchestrator | 2025-02-04 10:04:52 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:04:55.815807 | orchestrator | 2025-02-04 10:04:52 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:04:55.816031 | orchestrator | 2025-02-04 10:04:55 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:04:55.816150 | orchestrator | 2025-02-04 10:04:55 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:04:58.851027 | orchestrator | 2025-02-04 10:04:55 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:04:58.851183 | orchestrator | 2025-02-04 10:04:58 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:04:58.852787 | orchestrator | 2025-02-04 10:04:58 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:05:01.894760 | orchestrator | 2025-02-04 10:04:58 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:05:01.894861 | orchestrator | 2025-02-04 10:05:01 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:05:01.895565 | orchestrator | 2025-02-04 10:05:01 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:05:04.923275 | orchestrator | 2025-02-04 10:05:01 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:05:04.923435 | orchestrator | 2025-02-04 10:05:04 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:05:04.924308 | orchestrator | 2025-02-04 10:05:04 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:05:07.961448 | orchestrator | 2025-02-04 10:05:04 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:05:07.961570 | orchestrator | 2025-02-04 10:05:07 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:05:07.962409 | orchestrator | 2025-02-04 10:05:07 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:05:11.006335 | orchestrator | 2025-02-04 10:05:07 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:05:11.006518 | orchestrator | 2025-02-04 10:05:11 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:05:11.006864 | orchestrator | 2025-02-04 10:05:11 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:05:14.051757 | orchestrator | 2025-02-04 10:05:11 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:05:14.051906 | orchestrator | 2025-02-04 10:05:14 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:05:14.052098 | orchestrator | 2025-02-04 10:05:14 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:05:17.093109 | orchestrator | 2025-02-04 10:05:14 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:05:17.093222 | orchestrator | 2025-02-04 10:05:17 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:05:17.094957 | orchestrator | 2025-02-04 10:05:17 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:05:20.131743 | orchestrator | 2025-02-04 10:05:17 | INFO  | Wait 1 second(s) 
until the next check 2025-02-04 10:05:20.131826 | orchestrator | 2025-02-04 10:05:20 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:05:20.133784 | orchestrator | 2025-02-04 10:05:20 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:05:23.175698 | orchestrator | 2025-02-04 10:05:20 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:05:23.175797 | orchestrator | 2025-02-04 10:05:23 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:05:23.177838 | orchestrator | 2025-02-04 10:05:23 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:05:26.212078 | orchestrator | 2025-02-04 10:05:23 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:05:26.212157 | orchestrator | 2025-02-04 10:05:26 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:05:26.214890 | orchestrator | 2025-02-04 10:05:26 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:05:29.258932 | orchestrator | 2025-02-04 10:05:26 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:05:29.259103 | orchestrator | 2025-02-04 10:05:29 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:05:29.259785 | orchestrator | 2025-02-04 10:05:29 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:05:32.303088 | orchestrator | 2025-02-04 10:05:29 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:05:32.303231 | orchestrator | 2025-02-04 10:05:32 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:05:32.303527 | orchestrator | 2025-02-04 10:05:32 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:05:35.343477 | orchestrator | 2025-02-04 10:05:32 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:05:35.343591 | orchestrator | 2025-02-04 10:05:35 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:05:35.344066 | orchestrator | 2025-02-04 10:05:35 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:05:38.392680 | orchestrator | 2025-02-04 10:05:35 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:05:38.392827 | orchestrator | 2025-02-04 10:05:38 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:05:38.393712 | orchestrator | 2025-02-04 10:05:38 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:05:41.435251 | orchestrator | 2025-02-04 10:05:38 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:05:41.435434 | orchestrator | 2025-02-04 10:05:41 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:05:41.437261 | orchestrator | 2025-02-04 10:05:41 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:05:44.462607 | orchestrator | 2025-02-04 10:05:41 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:05:44.462741 | orchestrator | 2025-02-04 10:05:44 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:05:44.463297 | orchestrator | 2025-02-04 10:05:44 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:05:47.493976 | orchestrator | 2025-02-04 10:05:44 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:05:47.494207 | orchestrator | 2025-02-04 10:05:47 | INFO  | 
Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:05:50.529990 | orchestrator | 2025-02-04 10:05:47 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:05:50.530207 | orchestrator | 2025-02-04 10:05:47 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:05:50.530251 | orchestrator | 2025-02-04 10:05:50 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:05:53.571457 | orchestrator | 2025-02-04 10:05:50 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:05:53.571589 | orchestrator | 2025-02-04 10:05:50 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:05:53.571628 | orchestrator | 2025-02-04 10:05:53 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:05:53.572261 | orchestrator | 2025-02-04 10:05:53 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:05:56.618793 | orchestrator | 2025-02-04 10:05:53 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:05:56.618889 | orchestrator | 2025-02-04 10:05:56 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:05:56.620487 | orchestrator | 2025-02-04 10:05:56 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:05:59.661185 | orchestrator | 2025-02-04 10:05:56 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:05:59.662231 | orchestrator | 2025-02-04 10:05:59 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:05:59.662607 | orchestrator | 2025-02-04 10:05:59 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:06:02.689155 | orchestrator | 2025-02-04 10:05:59 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:06:02.689326 | orchestrator | 2025-02-04 10:06:02 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:06:02.689618 | orchestrator | 2025-02-04 10:06:02 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:06:05.730314 | orchestrator | 2025-02-04 10:06:02 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:06:05.730575 | orchestrator | 2025-02-04 10:06:05 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:06:05.730654 | orchestrator | 2025-02-04 10:06:05 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:06:08.776062 | orchestrator | 2025-02-04 10:06:05 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:06:08.776158 | orchestrator | 2025-02-04 10:06:08 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:06:08.777545 | orchestrator | 2025-02-04 10:06:08 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:06:11.827845 | orchestrator | 2025-02-04 10:06:08 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:06:11.828041 | orchestrator | 2025-02-04 10:06:11 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:06:11.828566 | orchestrator | 2025-02-04 10:06:11 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:06:14.869814 | orchestrator | 2025-02-04 10:06:11 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:06:14.870003 | orchestrator | 2025-02-04 10:06:14 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:06:14.870149 | 
orchestrator | 2025-02-04 10:06:14 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:06:17.908211 | orchestrator | 2025-02-04 10:06:14 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:06:17.908417 | orchestrator | 2025-02-04 10:06:17 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:06:17.908508 | orchestrator | 2025-02-04 10:06:17 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:06:20.937838 | orchestrator | 2025-02-04 10:06:17 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:06:20.937976 | orchestrator | 2025-02-04 10:06:20 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:06:23.962898 | orchestrator | 2025-02-04 10:06:20 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:06:23.963014 | orchestrator | 2025-02-04 10:06:20 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:06:23.963051 | orchestrator | 2025-02-04 10:06:23 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:06:27.003624 | orchestrator | 2025-02-04 10:06:23 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:06:27.003749 | orchestrator | 2025-02-04 10:06:23 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:06:27.003797 | orchestrator | 2025-02-04 10:06:26 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:06:27.005930 | orchestrator | 2025-02-04 10:06:27 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:06:30.047766 | orchestrator | 2025-02-04 10:06:27 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:06:30.047905 | orchestrator | 2025-02-04 10:06:30 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:06:30.049185 | orchestrator | 2025-02-04 10:06:30 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:06:33.084112 | orchestrator | 2025-02-04 10:06:30 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:06:33.084298 | orchestrator | 2025-02-04 10:06:33 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:06:33.084455 | orchestrator | 2025-02-04 10:06:33 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:06:36.121479 | orchestrator | 2025-02-04 10:06:33 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:06:36.121678 | orchestrator | 2025-02-04 10:06:36 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:06:36.122673 | orchestrator | 2025-02-04 10:06:36 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:06:39.166626 | orchestrator | 2025-02-04 10:06:36 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:06:39.166733 | orchestrator | 2025-02-04 10:06:39 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:06:39.167157 | orchestrator | 2025-02-04 10:06:39 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:06:39.167260 | orchestrator | 2025-02-04 10:06:39 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:06:42.217149 | orchestrator | 2025-02-04 10:06:42 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:06:42.217919 | orchestrator | 2025-02-04 10:06:42 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state 
STARTED 2025-02-04 10:06:45.261608 | orchestrator | 2025-02-04 10:06:42 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:06:45.261703 | orchestrator | 2025-02-04 10:06:45 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:06:45.263031 | orchestrator | 2025-02-04 10:06:45 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:06:48.299881 | orchestrator | 2025-02-04 10:06:45 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:06:48.300055 | orchestrator | 2025-02-04 10:06:48 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:06:51.329422 | orchestrator | 2025-02-04 10:06:48 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:06:51.329546 | orchestrator | 2025-02-04 10:06:48 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:06:51.329583 | orchestrator | 2025-02-04 10:06:51 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:06:54.356229 | orchestrator | 2025-02-04 10:06:51 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:06:54.356420 | orchestrator | 2025-02-04 10:06:51 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:06:54.356462 | orchestrator | 2025-02-04 10:06:54 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:06:54.356539 | orchestrator | 2025-02-04 10:06:54 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:06:57.392147 | orchestrator | 2025-02-04 10:06:54 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:06:57.392325 | orchestrator | 2025-02-04 10:06:57 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:06:57.392535 | orchestrator | 2025-02-04 10:06:57 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:06:57.393003 | orchestrator | 2025-02-04 10:06:57 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:07:00.433587 | orchestrator | 2025-02-04 10:07:00 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:07:03.464057 | orchestrator | 2025-02-04 10:07:00 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:07:03.464191 | orchestrator | 2025-02-04 10:07:00 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:07:03.464230 | orchestrator | 2025-02-04 10:07:03 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:07:03.464674 | orchestrator | 2025-02-04 10:07:03 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:07:06.510442 | orchestrator | 2025-02-04 10:07:03 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:07:06.510582 | orchestrator | 2025-02-04 10:07:06 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:07:06.512481 | orchestrator | 2025-02-04 10:07:06 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:07:09.553384 | orchestrator | 2025-02-04 10:07:06 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:07:09.553568 | orchestrator | 2025-02-04 10:07:09 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:07:12.590301 | orchestrator | 2025-02-04 10:07:09 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:07:12.590481 | orchestrator | 2025-02-04 10:07:09 | INFO  | Wait 1 second(s) 
until the next check 2025-02-04 10:07:12.590522 | orchestrator | 2025-02-04 10:07:12 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:07:12.591899 | orchestrator | 2025-02-04 10:07:12 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:07:15.627733 | orchestrator | 2025-02-04 10:07:12 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:07:15.627882 | orchestrator | 2025-02-04 10:07:15 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:07:15.631169 | orchestrator | 2025-02-04 10:07:15 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:07:15.631429 | orchestrator | 2025-02-04 10:07:15 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:07:18.673581 | orchestrator | 2025-02-04 10:07:18 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:07:18.673833 | orchestrator | 2025-02-04 10:07:18 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:07:21.712559 | orchestrator | 2025-02-04 10:07:18 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:07:21.712778 | orchestrator | 2025-02-04 10:07:21 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:07:21.712960 | orchestrator | 2025-02-04 10:07:21 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:07:24.753536 | orchestrator | 2025-02-04 10:07:21 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:07:24.753664 | orchestrator | 2025-02-04 10:07:24 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:07:24.753937 | orchestrator | 2025-02-04 10:07:24 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:07:27.797029 | orchestrator | 2025-02-04 10:07:24 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:07:27.797194 | orchestrator | 2025-02-04 10:07:27 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:07:27.797411 | orchestrator | 2025-02-04 10:07:27 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:07:30.830484 | orchestrator | 2025-02-04 10:07:27 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:07:30.830615 | orchestrator | 2025-02-04 10:07:30 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:07:33.864837 | orchestrator | 2025-02-04 10:07:30 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:07:33.864961 | orchestrator | 2025-02-04 10:07:30 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:07:33.864999 | orchestrator | 2025-02-04 10:07:33 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:07:33.865856 | orchestrator | 2025-02-04 10:07:33 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:07:36.910409 | orchestrator | 2025-02-04 10:07:33 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:07:36.910557 | orchestrator | 2025-02-04 10:07:36 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:07:36.910727 | orchestrator | 2025-02-04 10:07:36 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:07:39.950317 | orchestrator | 2025-02-04 10:07:36 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:07:39.950503 | orchestrator | 2025-02-04 10:07:39 | INFO  | 
Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:07:39.951702 | orchestrator | 2025-02-04 10:07:39 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:07:42.990510 | orchestrator | 2025-02-04 10:07:39 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:07:42.990655 | orchestrator | 2025-02-04 10:07:42 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:07:42.991234 | orchestrator | 2025-02-04 10:07:42 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:07:46.035211 | orchestrator | 2025-02-04 10:07:42 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:07:46.035411 | orchestrator | 2025-02-04 10:07:46 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:07:46.036364 | orchestrator | 2025-02-04 10:07:46 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:07:49.084778 | orchestrator | 2025-02-04 10:07:46 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:07:49.084917 | orchestrator | 2025-02-04 10:07:49 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:07:49.085600 | orchestrator | 2025-02-04 10:07:49 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:07:52.117581 | orchestrator | 2025-02-04 10:07:49 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:07:52.117738 | orchestrator | 2025-02-04 10:07:52 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:07:52.118775 | orchestrator | 2025-02-04 10:07:52 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:07:55.152132 | orchestrator | 2025-02-04 10:07:52 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:07:55.152243 | orchestrator | 2025-02-04 10:07:55 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:07:55.153355 | orchestrator | 2025-02-04 10:07:55 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:07:58.192024 | orchestrator | 2025-02-04 10:07:55 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:07:58.192149 | orchestrator | 2025-02-04 10:07:58 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:07:58.193610 | orchestrator | 2025-02-04 10:07:58 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:08:01.237882 | orchestrator | 2025-02-04 10:07:58 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:08:01.238084 | orchestrator | 2025-02-04 10:08:01 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:08:01.239018 | orchestrator | 2025-02-04 10:08:01 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:08:04.284529 | orchestrator | 2025-02-04 10:08:01 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:08:04.284748 | orchestrator | 2025-02-04 10:08:04 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:08:04.284851 | orchestrator | 2025-02-04 10:08:04 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:08:07.316439 | orchestrator | 2025-02-04 10:08:04 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:08:07.316591 | orchestrator | 2025-02-04 10:08:07 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:08:10.341497 | 
orchestrator | 2025-02-04 10:08:07 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:08:10.341617 | orchestrator | 2025-02-04 10:08:07 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:08:10.341656 | orchestrator | 2025-02-04 10:08:10 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:08:10.342121 | orchestrator | 2025-02-04 10:08:10 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:08:13.375394 | orchestrator | 2025-02-04 10:08:10 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:08:13.375546 | orchestrator | 2025-02-04 10:08:13 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:08:16.416716 | orchestrator | 2025-02-04 10:08:13 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:08:16.416814 | orchestrator | 2025-02-04 10:08:13 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:08:16.416841 | orchestrator | 2025-02-04 10:08:16 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:08:19.454702 | orchestrator | 2025-02-04 10:08:16 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:08:19.454829 | orchestrator | 2025-02-04 10:08:16 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:08:19.454868 | orchestrator | 2025-02-04 10:08:19 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:08:19.455741 | orchestrator | 2025-02-04 10:08:19 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:08:22.497283 | orchestrator | 2025-02-04 10:08:19 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:08:22.497437 | orchestrator | 2025-02-04 10:08:22 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:08:25.540709 | orchestrator | 2025-02-04 10:08:22 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:08:25.540838 | orchestrator | 2025-02-04 10:08:22 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:08:25.540906 | orchestrator | 2025-02-04 10:08:25 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:08:25.541206 | orchestrator | 2025-02-04 10:08:25 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:08:28.573779 | orchestrator | 2025-02-04 10:08:25 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:08:28.573955 | orchestrator | 2025-02-04 10:08:28 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:08:31.610657 | orchestrator | 2025-02-04 10:08:28 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:08:31.611578 | orchestrator | 2025-02-04 10:08:28 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:08:31.611634 | orchestrator | 2025-02-04 10:08:31 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:08:34.652272 | orchestrator | 2025-02-04 10:08:31 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:08:34.652420 | orchestrator | 2025-02-04 10:08:31 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:08:34.652460 | orchestrator | 2025-02-04 10:08:34 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:08:34.653938 | orchestrator | 2025-02-04 10:08:34 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state 
STARTED
2025-02-04 10:08:37.698745 | orchestrator | 2025-02-04 10:08:34 | INFO  | Wait 1 second(s) until the next check
2025-02-04 10:08:37.698929 | orchestrator | 2025-02-04 10:08:37 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED
2025-02-04 10:08:37.701088 | orchestrator | 2025-02-04 10:08:37 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED
[this poll cycle repeats roughly every 3 seconds, both tasks remaining in state STARTED, until a third task appears at 10:12:31]
2025-02-04 10:12:31.918525 | orchestrator | 2025-02-04 10:12:31 | INFO  | Task 5f4d229a-7f4c-4169-b5ee-c322484e820c is in state STARTED
2025-02-04 10:12:44.149373 | orchestrator | 2025-02-04 10:12:44 | INFO  | Task 5f4d229a-7f4c-4169-b5ee-c322484e820c is in state SUCCESS
2025-02-04 10:12:44.149420 | orchestrator | 2025-02-04 10:12:44 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED
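The repeating records above are a fixed-interval wait loop: the deploy wrapper polls the state of each submitted task, logs it, sleeps one second between checks, and stops watching a task once it reaches a terminal state such as SUCCESS (as the 5f4d229a task does at 10:12:44, after which it no longer appears in the cycle). A minimal sketch of that pattern, assuming Celery-style AsyncResult handles; the `wait_for_tasks` helper, its parameters, and `TERMINAL_STATES` are illustrative assumptions, not the actual osism implementation:

```python
import time

# Assumed set of terminal states, modeled on Celery's task states.
TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}

def wait_for_tasks(results, interval=1.0, log=print):
    """Poll task handles until every task reaches a terminal state.

    `results` maps task id -> an object with a `.state` attribute
    (e.g. a Celery AsyncResult). Illustrative sketch only.
    """
    pending = dict(results)
    while pending:
        for task_id, result in list(pending.items()):
            state = result.state  # e.g. PENDING, STARTED, SUCCESS
            log(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                del pending[task_id]  # stop watching finished tasks
        if pending:
            log(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

Note that although the loop announces a one-second wait, consecutive cycles in the log land about three seconds apart, presumably because each state query itself adds latency on top of the sleep.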
[the two remaining tasks, 7e541f1e-d12b-4499-a990-004ec22bccd6 and 0fe9ee47-86c4-4b01-afb8-83e483a6870c, are polled on the same cadence and stay in state STARTED from 10:12:47 through 10:17:23]
2025-02-04 10:17:23.875398 | orchestrator | 2025-02-04 10:17:23 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED
2025-02-04 10:17:23.876014 | orchestrator | 2025-02-04 10:17:23 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state
STARTED 2025-02-04 10:17:23.876111 | orchestrator | 2025-02-04 10:17:23 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:17:26.926001 | orchestrator | 2025-02-04 10:17:26 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:17:26.927185 | orchestrator | 2025-02-04 10:17:26 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:17:26.927296 | orchestrator | 2025-02-04 10:17:26 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:17:29.983662 | orchestrator | 2025-02-04 10:17:29 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:17:29.984198 | orchestrator | 2025-02-04 10:17:29 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:17:33.037117 | orchestrator | 2025-02-04 10:17:29 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:17:33.037300 | orchestrator | 2025-02-04 10:17:33 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:17:33.037623 | orchestrator | 2025-02-04 10:17:33 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:17:36.077338 | orchestrator | 2025-02-04 10:17:33 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:17:36.077435 | orchestrator | 2025-02-04 10:17:36 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:17:36.077746 | orchestrator | 2025-02-04 10:17:36 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:17:39.118638 | orchestrator | 2025-02-04 10:17:36 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:17:39.118781 | orchestrator | 2025-02-04 10:17:39 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:17:39.119390 | orchestrator | 2025-02-04 10:17:39 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:17:42.169311 | orchestrator | 2025-02-04 10:17:39 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:17:42.169453 | orchestrator | 2025-02-04 10:17:42 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:17:42.170899 | orchestrator | 2025-02-04 10:17:42 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:17:45.222646 | orchestrator | 2025-02-04 10:17:42 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:17:45.222784 | orchestrator | 2025-02-04 10:17:45 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:17:45.222991 | orchestrator | 2025-02-04 10:17:45 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:17:45.223365 | orchestrator | 2025-02-04 10:17:45 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:17:48.277759 | orchestrator | 2025-02-04 10:17:48 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:17:48.279497 | orchestrator | 2025-02-04 10:17:48 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:17:48.279745 | orchestrator | 2025-02-04 10:17:48 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:17:51.321792 | orchestrator | 2025-02-04 10:17:51 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:17:51.322547 | orchestrator | 2025-02-04 10:17:51 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:17:54.354965 | orchestrator | 2025-02-04 10:17:51 | INFO  | Wait 1 second(s) 
until the next check 2025-02-04 10:17:54.355106 | orchestrator | 2025-02-04 10:17:54 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:17:54.355243 | orchestrator | 2025-02-04 10:17:54 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:17:57.392105 | orchestrator | 2025-02-04 10:17:54 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:17:57.392298 | orchestrator | 2025-02-04 10:17:57 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:17:57.392727 | orchestrator | 2025-02-04 10:17:57 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:18:00.432717 | orchestrator | 2025-02-04 10:17:57 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:18:00.432907 | orchestrator | 2025-02-04 10:18:00 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:18:03.481499 | orchestrator | 2025-02-04 10:18:00 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:18:03.481619 | orchestrator | 2025-02-04 10:18:00 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:18:03.481661 | orchestrator | 2025-02-04 10:18:03 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:18:03.481747 | orchestrator | 2025-02-04 10:18:03 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:18:06.537647 | orchestrator | 2025-02-04 10:18:03 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:18:06.537792 | orchestrator | 2025-02-04 10:18:06 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:18:06.537888 | orchestrator | 2025-02-04 10:18:06 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:18:09.585758 | orchestrator | 2025-02-04 10:18:06 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:18:09.585970 | orchestrator | 2025-02-04 10:18:09 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:18:09.588631 | orchestrator | 2025-02-04 10:18:09 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:18:12.634587 | orchestrator | 2025-02-04 10:18:09 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:18:12.634761 | orchestrator | 2025-02-04 10:18:12 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:18:12.634854 | orchestrator | 2025-02-04 10:18:12 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:18:15.685328 | orchestrator | 2025-02-04 10:18:12 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:18:15.685488 | orchestrator | 2025-02-04 10:18:15 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:18:15.686241 | orchestrator | 2025-02-04 10:18:15 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:18:18.739761 | orchestrator | 2025-02-04 10:18:15 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:18:18.739910 | orchestrator | 2025-02-04 10:18:18 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:18:18.740105 | orchestrator | 2025-02-04 10:18:18 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:18:18.740412 | orchestrator | 2025-02-04 10:18:18 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:18:21.791093 | orchestrator | 2025-02-04 10:18:21 | INFO  | 
Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:18:21.794128 | orchestrator | 2025-02-04 10:18:21 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:18:21.795579 | orchestrator | 2025-02-04 10:18:21 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:18:24.844175 | orchestrator | 2025-02-04 10:18:24 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:18:24.844900 | orchestrator | 2025-02-04 10:18:24 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:18:27.892990 | orchestrator | 2025-02-04 10:18:24 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:18:27.893164 | orchestrator | 2025-02-04 10:18:27 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:18:27.893622 | orchestrator | 2025-02-04 10:18:27 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:18:30.942485 | orchestrator | 2025-02-04 10:18:27 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:18:30.942637 | orchestrator | 2025-02-04 10:18:30 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:18:30.943423 | orchestrator | 2025-02-04 10:18:30 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:18:33.980235 | orchestrator | 2025-02-04 10:18:30 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:18:33.980339 | orchestrator | 2025-02-04 10:18:33 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:18:33.981229 | orchestrator | 2025-02-04 10:18:33 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:18:37.023653 | orchestrator | 2025-02-04 10:18:33 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:18:37.023749 | orchestrator | 2025-02-04 10:18:37 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:18:40.062173 | orchestrator | 2025-02-04 10:18:37 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:18:40.062488 | orchestrator | 2025-02-04 10:18:37 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:18:40.062555 | orchestrator | 2025-02-04 10:18:40 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:18:43.098839 | orchestrator | 2025-02-04 10:18:40 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:18:43.098948 | orchestrator | 2025-02-04 10:18:40 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:18:43.098977 | orchestrator | 2025-02-04 10:18:43 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:18:43.099745 | orchestrator | 2025-02-04 10:18:43 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:18:46.139416 | orchestrator | 2025-02-04 10:18:43 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:18:46.139565 | orchestrator | 2025-02-04 10:18:46 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:18:46.139639 | orchestrator | 2025-02-04 10:18:46 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:18:49.185870 | orchestrator | 2025-02-04 10:18:46 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:18:49.185973 | orchestrator | 2025-02-04 10:18:49 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:18:49.187929 | 
orchestrator | 2025-02-04 10:18:49 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:18:52.237587 | orchestrator | 2025-02-04 10:18:49 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:18:52.237790 | orchestrator | 2025-02-04 10:18:52 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:18:52.238364 | orchestrator | 2025-02-04 10:18:52 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:18:55.286416 | orchestrator | 2025-02-04 10:18:52 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:18:55.286600 | orchestrator | 2025-02-04 10:18:55 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:18:55.287717 | orchestrator | 2025-02-04 10:18:55 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:18:58.337774 | orchestrator | 2025-02-04 10:18:55 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:18:58.337910 | orchestrator | 2025-02-04 10:18:58 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:18:58.339023 | orchestrator | 2025-02-04 10:18:58 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:19:01.378414 | orchestrator | 2025-02-04 10:18:58 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:19:01.378629 | orchestrator | 2025-02-04 10:19:01 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:19:01.378739 | orchestrator | 2025-02-04 10:19:01 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:19:04.429450 | orchestrator | 2025-02-04 10:19:01 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:19:04.429581 | orchestrator | 2025-02-04 10:19:04 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:19:04.430218 | orchestrator | 2025-02-04 10:19:04 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:19:07.476169 | orchestrator | 2025-02-04 10:19:04 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:19:07.476362 | orchestrator | 2025-02-04 10:19:07 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:19:07.476490 | orchestrator | 2025-02-04 10:19:07 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:19:10.514418 | orchestrator | 2025-02-04 10:19:07 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:19:10.514557 | orchestrator | 2025-02-04 10:19:10 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:19:13.555349 | orchestrator | 2025-02-04 10:19:10 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:19:13.555476 | orchestrator | 2025-02-04 10:19:10 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:19:13.555504 | orchestrator | 2025-02-04 10:19:13 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:19:13.555563 | orchestrator | 2025-02-04 10:19:13 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:19:16.613390 | orchestrator | 2025-02-04 10:19:13 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:19:16.613496 | orchestrator | 2025-02-04 10:19:16 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:19:16.614445 | orchestrator | 2025-02-04 10:19:16 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state 
STARTED 2025-02-04 10:19:19.661846 | orchestrator | 2025-02-04 10:19:16 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:19:19.662006 | orchestrator | 2025-02-04 10:19:19 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:19:19.663543 | orchestrator | 2025-02-04 10:19:19 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:19:22.713111 | orchestrator | 2025-02-04 10:19:19 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:19:22.713338 | orchestrator | 2025-02-04 10:19:22 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:19:22.714967 | orchestrator | 2025-02-04 10:19:22 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:19:22.715614 | orchestrator | 2025-02-04 10:19:22 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:19:25.758662 | orchestrator | 2025-02-04 10:19:25 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:19:25.759319 | orchestrator | 2025-02-04 10:19:25 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:19:28.805855 | orchestrator | 2025-02-04 10:19:25 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:19:28.806141 | orchestrator | 2025-02-04 10:19:28 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:19:28.806323 | orchestrator | 2025-02-04 10:19:28 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:19:31.854694 | orchestrator | 2025-02-04 10:19:28 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:19:31.854837 | orchestrator | 2025-02-04 10:19:31 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:19:34.909922 | orchestrator | 2025-02-04 10:19:31 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:19:35.016762 | orchestrator | 2025-02-04 10:19:31 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:19:35.016891 | orchestrator | 2025-02-04 10:19:34 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:19:37.958718 | orchestrator | 2025-02-04 10:19:34 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:19:37.958895 | orchestrator | 2025-02-04 10:19:34 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:19:37.958979 | orchestrator | 2025-02-04 10:19:37 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:19:37.959071 | orchestrator | 2025-02-04 10:19:37 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:19:41.005918 | orchestrator | 2025-02-04 10:19:37 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:19:41.009092 | orchestrator | 2025-02-04 10:19:41 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:19:44.054100 | orchestrator | 2025-02-04 10:19:41 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:19:44.054261 | orchestrator | 2025-02-04 10:19:41 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:19:44.054301 | orchestrator | 2025-02-04 10:19:44 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:19:47.090377 | orchestrator | 2025-02-04 10:19:44 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:19:47.090500 | orchestrator | 2025-02-04 10:19:44 | INFO  | Wait 1 second(s) 
until the next check 2025-02-04 10:19:47.090536 | orchestrator | 2025-02-04 10:19:47 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:19:47.090760 | orchestrator | 2025-02-04 10:19:47 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:19:50.130729 | orchestrator | 2025-02-04 10:19:47 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:19:50.130870 | orchestrator | 2025-02-04 10:19:50 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:19:50.130918 | orchestrator | 2025-02-04 10:19:50 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:19:53.167534 | orchestrator | 2025-02-04 10:19:50 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:19:53.169320 | orchestrator | 2025-02-04 10:19:53 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:19:56.214316 | orchestrator | 2025-02-04 10:19:53 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:19:56.214395 | orchestrator | 2025-02-04 10:19:53 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:19:56.214414 | orchestrator | 2025-02-04 10:19:56 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:19:56.215394 | orchestrator | 2025-02-04 10:19:56 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:19:59.261237 | orchestrator | 2025-02-04 10:19:56 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:19:59.261381 | orchestrator | 2025-02-04 10:19:59 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:19:59.263358 | orchestrator | 2025-02-04 10:19:59 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:20:02.309180 | orchestrator | 2025-02-04 10:19:59 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:20:02.309357 | orchestrator | 2025-02-04 10:20:02 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:20:05.352740 | orchestrator | 2025-02-04 10:20:02 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:20:05.716408 | orchestrator | 2025-02-04 10:20:02 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:20:05.716509 | orchestrator | 2025-02-04 10:20:05 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:20:08.392042 | orchestrator | 2025-02-04 10:20:05 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:20:08.392282 | orchestrator | 2025-02-04 10:20:05 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:20:08.392329 | orchestrator | 2025-02-04 10:20:08 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:20:08.392435 | orchestrator | 2025-02-04 10:20:08 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:20:11.461429 | orchestrator | 2025-02-04 10:20:08 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:20:11.461546 | orchestrator | 2025-02-04 10:20:11 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:20:11.461803 | orchestrator | 2025-02-04 10:20:11 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:20:14.515617 | orchestrator | 2025-02-04 10:20:11 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:20:14.515746 | orchestrator | 2025-02-04 10:20:14 | INFO  | 
Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:20:14.517439 | orchestrator | 2025-02-04 10:20:14 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:20:17.563264 | orchestrator | 2025-02-04 10:20:14 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:20:17.563324 | orchestrator | 2025-02-04 10:20:17 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:20:17.564157 | orchestrator | 2025-02-04 10:20:17 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:20:20.606551 | orchestrator | 2025-02-04 10:20:17 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:20:20.606658 | orchestrator | 2025-02-04 10:20:20 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:20:20.606701 | orchestrator | 2025-02-04 10:20:20 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:20:23.648149 | orchestrator | 2025-02-04 10:20:20 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:20:23.648335 | orchestrator | 2025-02-04 10:20:23 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:20:23.648764 | orchestrator | 2025-02-04 10:20:23 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:20:26.685594 | orchestrator | 2025-02-04 10:20:23 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:20:26.685680 | orchestrator | 2025-02-04 10:20:26 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:20:26.686593 | orchestrator | 2025-02-04 10:20:26 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:20:29.731998 | orchestrator | 2025-02-04 10:20:26 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:20:29.732112 | orchestrator | 2025-02-04 10:20:29 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:20:32.783314 | orchestrator | 2025-02-04 10:20:29 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:20:32.783406 | orchestrator | 2025-02-04 10:20:29 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:20:32.783426 | orchestrator | 2025-02-04 10:20:32 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:20:32.783727 | orchestrator | 2025-02-04 10:20:32 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:20:35.835054 | orchestrator | 2025-02-04 10:20:32 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:20:35.835162 | orchestrator | 2025-02-04 10:20:35 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:20:35.836538 | orchestrator | 2025-02-04 10:20:35 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:20:38.887785 | orchestrator | 2025-02-04 10:20:35 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:20:38.887895 | orchestrator | 2025-02-04 10:20:38 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:20:41.938717 | orchestrator | 2025-02-04 10:20:38 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:20:41.938847 | orchestrator | 2025-02-04 10:20:38 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:20:41.938890 | orchestrator | 2025-02-04 10:20:41 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:20:41.939518 | 
orchestrator | 2025-02-04 10:20:41 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:20:44.991325 | orchestrator | 2025-02-04 10:20:41 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:20:44.991432 | orchestrator | 2025-02-04 10:20:44 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:20:44.993349 | orchestrator | 2025-02-04 10:20:44 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:20:48.044507 | orchestrator | 2025-02-04 10:20:44 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:20:48.044650 | orchestrator | 2025-02-04 10:20:48 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:20:51.080425 | orchestrator | 2025-02-04 10:20:48 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:20:51.080533 | orchestrator | 2025-02-04 10:20:48 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:20:51.080567 | orchestrator | 2025-02-04 10:20:51 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:20:54.117908 | orchestrator | 2025-02-04 10:20:51 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:20:54.118065 | orchestrator | 2025-02-04 10:20:51 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:20:54.118100 | orchestrator | 2025-02-04 10:20:54 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:20:57.153633 | orchestrator | 2025-02-04 10:20:54 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:20:57.153759 | orchestrator | 2025-02-04 10:20:54 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:20:57.153796 | orchestrator | 2025-02-04 10:20:57 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:21:00.190088 | orchestrator | 2025-02-04 10:20:57 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:21:00.190265 | orchestrator | 2025-02-04 10:20:57 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:21:00.190312 | orchestrator | 2025-02-04 10:21:00 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:21:03.242169 | orchestrator | 2025-02-04 10:21:00 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:21:03.242371 | orchestrator | 2025-02-04 10:21:00 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:21:03.242409 | orchestrator | 2025-02-04 10:21:03 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:21:03.243449 | orchestrator | 2025-02-04 10:21:03 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:21:06.282620 | orchestrator | 2025-02-04 10:21:03 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:21:06.282749 | orchestrator | 2025-02-04 10:21:06 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:21:06.284251 | orchestrator | 2025-02-04 10:21:06 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:21:09.334999 | orchestrator | 2025-02-04 10:21:06 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:21:09.335117 | orchestrator | 2025-02-04 10:21:09 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:21:09.337540 | orchestrator | 2025-02-04 10:21:09 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state 
STARTED 2025-02-04 10:21:12.382307 | orchestrator | 2025-02-04 10:21:09 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:21:12.382550 | orchestrator | 2025-02-04 10:21:12 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:21:12.382765 | orchestrator | 2025-02-04 10:21:12 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:21:15.425672 | orchestrator | 2025-02-04 10:21:12 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:21:15.425820 | orchestrator | 2025-02-04 10:21:15 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:21:18.470945 | orchestrator | 2025-02-04 10:21:15 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:21:18.471030 | orchestrator | 2025-02-04 10:21:15 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:21:18.471052 | orchestrator | 2025-02-04 10:21:18 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:21:18.471618 | orchestrator | 2025-02-04 10:21:18 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:21:21.512793 | orchestrator | 2025-02-04 10:21:18 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:21:21.512969 | orchestrator | 2025-02-04 10:21:21 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:21:21.513774 | orchestrator | 2025-02-04 10:21:21 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:21:24.562087 | orchestrator | 2025-02-04 10:21:21 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:21:24.562239 | orchestrator | 2025-02-04 10:21:24 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:21:24.564405 | orchestrator | 2025-02-04 10:21:24 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:21:27.606388 | orchestrator | 2025-02-04 10:21:24 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:21:27.606498 | orchestrator | 2025-02-04 10:21:27 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:21:27.606536 | orchestrator | 2025-02-04 10:21:27 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:21:30.650380 | orchestrator | 2025-02-04 10:21:27 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:21:30.650542 | orchestrator | 2025-02-04 10:21:30 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:21:30.651824 | orchestrator | 2025-02-04 10:21:30 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:21:33.689812 | orchestrator | 2025-02-04 10:21:30 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:21:33.689965 | orchestrator | 2025-02-04 10:21:33 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:21:33.690163 | orchestrator | 2025-02-04 10:21:33 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:21:36.725526 | orchestrator | 2025-02-04 10:21:33 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:21:36.725667 | orchestrator | 2025-02-04 10:21:36 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:21:39.764380 | orchestrator | 2025-02-04 10:21:36 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:21:39.764502 | orchestrator | 2025-02-04 10:21:36 | INFO  | Wait 1 second(s) 
until the next check 2025-02-04 10:21:39.764540 | orchestrator | 2025-02-04 10:21:39 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:21:42.797959 | orchestrator | 2025-02-04 10:21:39 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:21:42.798146 | orchestrator | 2025-02-04 10:21:39 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:21:42.798220 | orchestrator | 2025-02-04 10:21:42 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:21:45.834342 | orchestrator | 2025-02-04 10:21:42 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:21:45.834441 | orchestrator | 2025-02-04 10:21:42 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:21:45.834462 | orchestrator | 2025-02-04 10:21:45 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:21:48.868044 | orchestrator | 2025-02-04 10:21:45 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:21:48.868200 | orchestrator | 2025-02-04 10:21:45 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:21:48.868241 | orchestrator | 2025-02-04 10:21:48 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:21:48.868920 | orchestrator | 2025-02-04 10:21:48 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:21:51.904068 | orchestrator | 2025-02-04 10:21:48 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:21:51.904242 | orchestrator | 2025-02-04 10:21:51 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:21:54.945616 | orchestrator | 2025-02-04 10:21:51 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:21:54.945755 | orchestrator | 2025-02-04 10:21:51 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:21:54.945806 | orchestrator | 2025-02-04 10:21:54 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:21:54.947042 | orchestrator | 2025-02-04 10:21:54 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:21:57.988510 | orchestrator | 2025-02-04 10:21:54 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:21:57.988616 | orchestrator | 2025-02-04 10:21:57 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:22:01.043947 | orchestrator | 2025-02-04 10:21:57 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:22:01.044078 | orchestrator | 2025-02-04 10:21:57 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:22:01.044118 | orchestrator | 2025-02-04 10:22:01 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:22:01.045528 | orchestrator | 2025-02-04 10:22:01 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:22:04.085519 | orchestrator | 2025-02-04 10:22:01 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:22:04.085681 | orchestrator | 2025-02-04 10:22:04 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:22:04.085823 | orchestrator | 2025-02-04 10:22:04 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:22:07.124318 | orchestrator | 2025-02-04 10:22:04 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:22:07.124500 | orchestrator | 2025-02-04 10:22:07 | INFO  | 
Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:22:10.167391 | orchestrator | 2025-02-04 10:22:07 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:22:10.167503 | orchestrator | 2025-02-04 10:22:07 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:22:10.167545 | orchestrator | 2025-02-04 10:22:10 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:22:10.168406 | orchestrator | 2025-02-04 10:22:10 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:22:13.212121 | orchestrator | 2025-02-04 10:22:10 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:22:13.212323 | orchestrator | 2025-02-04 10:22:13 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:22:16.247565 | orchestrator | 2025-02-04 10:22:13 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:22:16.247692 | orchestrator | 2025-02-04 10:22:13 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:22:16.247729 | orchestrator | 2025-02-04 10:22:16 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:22:19.287713 | orchestrator | 2025-02-04 10:22:16 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:22:19.287843 | orchestrator | 2025-02-04 10:22:16 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:22:19.287886 | orchestrator | 2025-02-04 10:22:19 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:22:19.288332 | orchestrator | 2025-02-04 10:22:19 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:22:22.336871 | orchestrator | 2025-02-04 10:22:19 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:22:22.337010 | orchestrator | 2025-02-04 10:22:22 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:22:25.369288 | orchestrator | 2025-02-04 10:22:22 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:22:25.369363 | orchestrator | 2025-02-04 10:22:22 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:22:25.369378 | orchestrator | 2025-02-04 10:22:25 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:22:25.369409 | orchestrator | 2025-02-04 10:22:25 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:22:28.411131 | orchestrator | 2025-02-04 10:22:25 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:22:28.411309 | orchestrator | 2025-02-04 10:22:28 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:22:28.411907 | orchestrator | 2025-02-04 10:22:28 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:22:31.464383 | orchestrator | 2025-02-04 10:22:28 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:22:31.464560 | orchestrator | 2025-02-04 10:22:31 | INFO  | Task e4bf67a1-f897-4cc6-a981-2b4f23a2b93c is in state STARTED 2025-02-04 10:22:31.465512 | orchestrator | 2025-02-04 10:22:31 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:22:31.465545 | orchestrator | 2025-02-04 10:22:31 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:22:34.519939 | orchestrator | 2025-02-04 10:22:31 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:22:34.520070 | 
orchestrator | 2025-02-04 10:22:34 | INFO  | Task e4bf67a1-f897-4cc6-a981-2b4f23a2b93c is in state STARTED 2025-02-04 10:22:34.520377 | orchestrator | 2025-02-04 10:22:34 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:22:34.522151 | orchestrator | 2025-02-04 10:22:34 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:22:37.564078 | orchestrator | 2025-02-04 10:22:34 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:22:37.564256 | orchestrator | 2025-02-04 10:22:37 | INFO  | Task e4bf67a1-f897-4cc6-a981-2b4f23a2b93c is in state STARTED 2025-02-04 10:22:37.564506 | orchestrator | 2025-02-04 10:22:37 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:22:37.565343 | orchestrator | 2025-02-04 10:22:37 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:22:40.615561 | orchestrator | 2025-02-04 10:22:37 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:22:40.615698 | orchestrator | 2025-02-04 10:22:40 | INFO  | Task e4bf67a1-f897-4cc6-a981-2b4f23a2b93c is in state STARTED 2025-02-04 10:22:40.616468 | orchestrator | 2025-02-04 10:22:40 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:22:40.616511 | orchestrator | 2025-02-04 10:22:40 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:22:43.657857 | orchestrator | 2025-02-04 10:22:40 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:22:43.658069 | orchestrator | 2025-02-04 10:22:43 | INFO  | Task e4bf67a1-f897-4cc6-a981-2b4f23a2b93c is in state SUCCESS 2025-02-04 10:22:46.699406 | orchestrator | 2025-02-04 10:22:43 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:22:46.699531 | orchestrator | 2025-02-04 10:22:43 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:22:46.699551 | orchestrator | 2025-02-04 10:22:43 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:22:46.699585 | orchestrator | 2025-02-04 10:22:46 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:22:46.700598 | orchestrator | 2025-02-04 10:22:46 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:22:49.743000 | orchestrator | 2025-02-04 10:22:46 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:22:49.743099 | orchestrator | 2025-02-04 10:22:49 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:22:52.782742 | orchestrator | 2025-02-04 10:22:49 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:22:52.782865 | orchestrator | 2025-02-04 10:22:49 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:22:52.782898 | orchestrator | 2025-02-04 10:22:52 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:22:52.783620 | orchestrator | 2025-02-04 10:22:52 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:22:55.818469 | orchestrator | 2025-02-04 10:22:52 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:22:55.818580 | orchestrator | 2025-02-04 10:22:55 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:22:55.821462 | orchestrator | 2025-02-04 10:22:55 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:22:58.865866 | orchestrator | 2025-02-04 10:22:55 | INFO  | Wait 
1 second(s) until the next check 2025-02-04 10:22:58.866076 | orchestrator | 2025-02-04 10:22:58 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:23:01.909424 | orchestrator | 2025-02-04 10:22:58 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:23:01.909560 | orchestrator | 2025-02-04 10:22:58 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:23:01.909592 | orchestrator | 2025-02-04 10:23:01 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:23:01.909650 | orchestrator | 2025-02-04 10:23:01 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:23:04.955853 | orchestrator | 2025-02-04 10:23:01 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:23:04.955944 | orchestrator | 2025-02-04 10:23:04 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:23:04.957723 | orchestrator | 2025-02-04 10:23:04 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:23:04.957997 | orchestrator | 2025-02-04 10:23:04 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:23:08.002298 | orchestrator | 2025-02-04 10:23:07 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:23:08.002405 | orchestrator | 2025-02-04 10:23:08 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:23:11.053735 | orchestrator | 2025-02-04 10:23:08 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:23:11.053826 | orchestrator | 2025-02-04 10:23:11 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:23:11.055056 | orchestrator | 2025-02-04 10:23:11 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:23:11.055115 | orchestrator | 2025-02-04 10:23:11 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:23:14.117361 | orchestrator | 2025-02-04 10:23:14 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:23:14.118113 | orchestrator | 2025-02-04 10:23:14 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:23:14.118248 | orchestrator | 2025-02-04 10:23:14 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:23:17.163440 | orchestrator | 2025-02-04 10:23:17 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:23:17.163688 | orchestrator | 2025-02-04 10:23:17 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:23:20.209244 | orchestrator | 2025-02-04 10:23:17 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:23:20.209345 | orchestrator | 2025-02-04 10:23:20 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:23:20.209676 | orchestrator | 2025-02-04 10:23:20 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:23:23.256377 | orchestrator | 2025-02-04 10:23:20 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:23:23.256474 | orchestrator | 2025-02-04 10:23:23 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:23:23.257085 | orchestrator | 2025-02-04 10:23:23 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:23:26.304909 | orchestrator | 2025-02-04 10:23:23 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:23:26.305010 | orchestrator | 2025-02-04 10:23:26 | 
INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:23:29.353442 | orchestrator | 2025-02-04 10:23:26 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:23:29.353549 | orchestrator | 2025-02-04 10:23:26 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:23:29.353569 | orchestrator | 2025-02-04 10:23:29 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:23:29.353607 | orchestrator | 2025-02-04 10:23:29 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:23:32.398765 | orchestrator | 2025-02-04 10:23:29 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:23:32.398893 | orchestrator | 2025-02-04 10:23:32 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:23:32.399511 | orchestrator | 2025-02-04 10:23:32 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:23:35.446966 | orchestrator | 2025-02-04 10:23:32 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:23:35.447066 | orchestrator | 2025-02-04 10:23:35 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:23:35.447246 | orchestrator | 2025-02-04 10:23:35 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:23:38.496919 | orchestrator | 2025-02-04 10:23:35 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:23:38.497055 | orchestrator | 2025-02-04 10:23:38 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:23:41.549103 | orchestrator | 2025-02-04 10:23:38 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:23:41.549290 | orchestrator | 2025-02-04 10:23:38 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:23:41.549386 | orchestrator | 2025-02-04 10:23:41 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:23:41.549486 | orchestrator | 2025-02-04 10:23:41 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:23:44.604990 | orchestrator | 2025-02-04 10:23:41 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:23:44.605132 | orchestrator | 2025-02-04 10:23:44 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:23:44.606702 | orchestrator | 2025-02-04 10:23:44 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:23:47.657366 | orchestrator | 2025-02-04 10:23:44 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:23:47.657532 | orchestrator | 2025-02-04 10:23:47 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:23:47.659098 | orchestrator | 2025-02-04 10:23:47 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:23:50.708618 | orchestrator | 2025-02-04 10:23:47 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:23:50.708760 | orchestrator | 2025-02-04 10:23:50 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:23:53.756060 | orchestrator | 2025-02-04 10:23:50 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:23:53.756161 | orchestrator | 2025-02-04 10:23:50 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:23:53.756184 | orchestrator | 2025-02-04 10:23:53 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:23:53.757751 | 
orchestrator | 2025-02-04 10:23:53 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:23:56.805113 | orchestrator | 2025-02-04 10:23:53 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:23:56.805337 | orchestrator | 2025-02-04 10:23:56 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:23:56.806531 | orchestrator | 2025-02-04 10:23:56 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:23:59.861262 | orchestrator | 2025-02-04 10:23:56 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:23:59.861378 | orchestrator | 2025-02-04 10:23:59 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:23:59.864633 | orchestrator | 2025-02-04 10:23:59 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:24:02.918177 | orchestrator | 2025-02-04 10:23:59 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:24:02.918398 | orchestrator | 2025-02-04 10:24:02 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:24:02.918497 | orchestrator | 2025-02-04 10:24:02 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:24:05.963075 | orchestrator | 2025-02-04 10:24:02 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:24:05.963247 | orchestrator | 2025-02-04 10:24:05 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:24:05.964826 | orchestrator | 2025-02-04 10:24:05 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:24:09.011955 | orchestrator | 2025-02-04 10:24:05 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:24:09.012179 | orchestrator | 2025-02-04 10:24:09 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:24:09.012304 | orchestrator | 2025-02-04 10:24:09 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:24:12.059583 | orchestrator | 2025-02-04 10:24:09 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:24:12.059734 | orchestrator | 2025-02-04 10:24:12 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:24:12.059803 | orchestrator | 2025-02-04 10:24:12 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:24:15.110207 | orchestrator | 2025-02-04 10:24:12 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:24:15.110346 | orchestrator | 2025-02-04 10:24:15 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:24:15.111115 | orchestrator | 2025-02-04 10:24:15 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:24:18.161865 | orchestrator | 2025-02-04 10:24:15 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:24:18.161970 | orchestrator | 2025-02-04 10:24:18 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:24:18.162868 | orchestrator | 2025-02-04 10:24:18 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:24:21.211320 | orchestrator | 2025-02-04 10:24:18 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:24:21.211450 | orchestrator | 2025-02-04 10:24:21 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:24:21.211801 | orchestrator | 2025-02-04 10:24:21 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state 
STARTED 2025-02-04 10:24:24.262516 | orchestrator | 2025-02-04 10:24:21 | INFO  | Wait 1 second(s) until the next check
[... repetitive polling output condensed: roughly every 3 seconds, from 10:24:24 through 10:32:26, the loop logs "Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED", "Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED" and "Wait 1 second(s) until the next check"; both tasks remain in state STARTED for the whole interval ...]
2025-02-04 10:32:29.399689 | orchestrator | 2025-02-04 10:32:29 | INFO  | Task 4e41e9a5-49dc-4f5b-8090-1d5354b607c9 is in state STARTED
[... all three tasks are polled for four more rounds, the first two remaining in state STARTED ...]
2025-02-04 10:32:41.648822 | orchestrator | 2025-02-04 10:32:41 | INFO  | Task 4e41e9a5-49dc-4f5b-8090-1d5354b607c9 is in state SUCCESS
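The repeated "is in state STARTED" / "Wait 1 second(s) until the next check" lines above are the output of a client-side wait loop: the job submits background tasks and then polls each task's state until every one of them reaches a terminal state. A minimal sketch of such a loop, assuming a Celery-style AsyncResult API (the task UUIDs, the state names and the 1-second delay are taken from the log; the function name wait_for_tasks and the app handle are illustrative, not the actual osism implementation):

import time

from celery.result import AsyncResult

# States after which a task will not change again.
TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}

def wait_for_tasks(task_ids, app, delay=1):
    """Poll every task until each one reaches a terminal state."""
    pending = set(task_ids)
    while pending:
        for task_id in list(pending):
            state = AsyncResult(task_id, app=app).state
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.discard(task_id)  # stop reporting finished tasks
        if pending:
            print(f"Wait {delay} second(s) until the next check")
            time.sleep(delay)

Note that consecutive polls in this log are about 3 seconds apart even though the loop only announces a 1-second wait; the state queries themselves appear to account for the remaining time.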
orchestrator | 2025-02-04 10:32:44 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:32:44.697478 | orchestrator | 2025-02-04 10:32:44 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:32:47.743092 | orchestrator | 2025-02-04 10:32:47 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:32:47.744642 | orchestrator | 2025-02-04 10:32:47 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:32:50.791180 | orchestrator | 2025-02-04 10:32:47 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:32:50.791320 | orchestrator | 2025-02-04 10:32:50 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:32:50.791637 | orchestrator | 2025-02-04 10:32:50 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:32:50.791755 | orchestrator | 2025-02-04 10:32:50 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:32:53.828685 | orchestrator | 2025-02-04 10:32:53 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:32:53.830156 | orchestrator | 2025-02-04 10:32:53 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:32:56.884116 | orchestrator | 2025-02-04 10:32:53 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:32:56.884254 | orchestrator | 2025-02-04 10:32:56 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:32:56.886078 | orchestrator | 2025-02-04 10:32:56 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:32:59.940575 | orchestrator | 2025-02-04 10:32:56 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:32:59.940713 | orchestrator | 2025-02-04 10:32:59 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:32:59.941984 | orchestrator | 2025-02-04 10:32:59 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:33:02.990391 | orchestrator | 2025-02-04 10:32:59 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:33:02.990538 | orchestrator | 2025-02-04 10:33:02 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:33:02.990622 | orchestrator | 2025-02-04 10:33:02 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:33:06.046476 | orchestrator | 2025-02-04 10:33:02 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:33:06.046602 | orchestrator | 2025-02-04 10:33:06 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:33:06.046712 | orchestrator | 2025-02-04 10:33:06 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:33:09.091255 | orchestrator | 2025-02-04 10:33:06 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:33:09.091395 | orchestrator | 2025-02-04 10:33:09 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:33:12.140833 | orchestrator | 2025-02-04 10:33:09 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:33:12.141001 | orchestrator | 2025-02-04 10:33:09 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:33:12.141123 | orchestrator | 2025-02-04 10:33:12 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:33:15.185706 | orchestrator | 2025-02-04 10:33:12 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state 
STARTED 2025-02-04 10:33:15.185876 | orchestrator | 2025-02-04 10:33:12 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:33:15.185916 | orchestrator | 2025-02-04 10:33:15 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:33:15.187167 | orchestrator | 2025-02-04 10:33:15 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:33:18.236312 | orchestrator | 2025-02-04 10:33:15 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:33:18.236464 | orchestrator | 2025-02-04 10:33:18 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:33:21.279756 | orchestrator | 2025-02-04 10:33:18 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:33:21.279904 | orchestrator | 2025-02-04 10:33:18 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:33:21.279944 | orchestrator | 2025-02-04 10:33:21 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:33:21.280374 | orchestrator | 2025-02-04 10:33:21 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:33:24.332455 | orchestrator | 2025-02-04 10:33:21 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:33:24.332603 | orchestrator | 2025-02-04 10:33:24 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:33:24.333333 | orchestrator | 2025-02-04 10:33:24 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:33:27.381702 | orchestrator | 2025-02-04 10:33:24 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:33:27.381917 | orchestrator | 2025-02-04 10:33:27 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:33:27.382101 | orchestrator | 2025-02-04 10:33:27 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:33:30.429912 | orchestrator | 2025-02-04 10:33:27 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:33:30.430223 | orchestrator | 2025-02-04 10:33:30 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:33:30.430328 | orchestrator | 2025-02-04 10:33:30 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:33:30.430644 | orchestrator | 2025-02-04 10:33:30 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:33:33.475395 | orchestrator | 2025-02-04 10:33:33 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:33:33.476769 | orchestrator | 2025-02-04 10:33:33 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:33:36.525490 | orchestrator | 2025-02-04 10:33:33 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:33:36.525633 | orchestrator | 2025-02-04 10:33:36 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:33:36.526266 | orchestrator | 2025-02-04 10:33:36 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:33:39.571015 | orchestrator | 2025-02-04 10:33:36 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:33:39.571219 | orchestrator | 2025-02-04 10:33:39 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:33:39.571523 | orchestrator | 2025-02-04 10:33:39 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:33:42.617896 | orchestrator | 2025-02-04 10:33:39 | INFO  | Wait 1 second(s) 
until the next check 2025-02-04 10:33:42.618828 | orchestrator | 2025-02-04 10:33:42 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:33:45.670221 | orchestrator | 2025-02-04 10:33:42 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:33:45.670350 | orchestrator | 2025-02-04 10:33:42 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:33:45.670388 | orchestrator | 2025-02-04 10:33:45 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:33:45.671162 | orchestrator | 2025-02-04 10:33:45 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:33:45.671306 | orchestrator | 2025-02-04 10:33:45 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:33:48.720771 | orchestrator | 2025-02-04 10:33:48 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:33:48.721815 | orchestrator | 2025-02-04 10:33:48 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:33:51.760307 | orchestrator | 2025-02-04 10:33:48 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:33:51.760434 | orchestrator | 2025-02-04 10:33:51 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:33:51.761318 | orchestrator | 2025-02-04 10:33:51 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:33:54.816358 | orchestrator | 2025-02-04 10:33:51 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:33:54.816493 | orchestrator | 2025-02-04 10:33:54 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:33:54.816771 | orchestrator | 2025-02-04 10:33:54 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:33:54.817146 | orchestrator | 2025-02-04 10:33:54 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:33:57.869753 | orchestrator | 2025-02-04 10:33:57 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:33:57.870243 | orchestrator | 2025-02-04 10:33:57 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:34:00.923550 | orchestrator | 2025-02-04 10:33:57 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:34:00.923662 | orchestrator | 2025-02-04 10:34:00 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:34:00.923881 | orchestrator | 2025-02-04 10:34:00 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:34:00.923949 | orchestrator | 2025-02-04 10:34:00 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:34:03.975519 | orchestrator | 2025-02-04 10:34:03 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:34:03.976998 | orchestrator | 2025-02-04 10:34:03 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:34:07.039507 | orchestrator | 2025-02-04 10:34:03 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:34:07.039662 | orchestrator | 2025-02-04 10:34:07 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:34:10.080972 | orchestrator | 2025-02-04 10:34:07 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:34:10.081144 | orchestrator | 2025-02-04 10:34:07 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:34:10.081203 | orchestrator | 2025-02-04 10:34:10 | INFO  | 
Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:34:10.082715 | orchestrator | 2025-02-04 10:34:10 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:34:13.130880 | orchestrator | 2025-02-04 10:34:10 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:34:13.131088 | orchestrator | 2025-02-04 10:34:13 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:34:13.132988 | orchestrator | 2025-02-04 10:34:13 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:34:16.184294 | orchestrator | 2025-02-04 10:34:13 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:34:16.184430 | orchestrator | 2025-02-04 10:34:16 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:34:16.185972 | orchestrator | 2025-02-04 10:34:16 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:34:16.186177 | orchestrator | 2025-02-04 10:34:16 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:34:19.235266 | orchestrator | 2025-02-04 10:34:19 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:34:19.236206 | orchestrator | 2025-02-04 10:34:19 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:34:22.286613 | orchestrator | 2025-02-04 10:34:19 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:34:22.286790 | orchestrator | 2025-02-04 10:34:22 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:34:22.287179 | orchestrator | 2025-02-04 10:34:22 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:34:25.323669 | orchestrator | 2025-02-04 10:34:22 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:34:25.323821 | orchestrator | 2025-02-04 10:34:25 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:34:28.376348 | orchestrator | 2025-02-04 10:34:25 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:34:28.376469 | orchestrator | 2025-02-04 10:34:25 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:34:28.376510 | orchestrator | 2025-02-04 10:34:28 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:34:28.376870 | orchestrator | 2025-02-04 10:34:28 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:34:31.420738 | orchestrator | 2025-02-04 10:34:28 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:34:31.420870 | orchestrator | 2025-02-04 10:34:31 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:34:31.421797 | orchestrator | 2025-02-04 10:34:31 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:34:31.421924 | orchestrator | 2025-02-04 10:34:31 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:34:34.475843 | orchestrator | 2025-02-04 10:34:34 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:34:34.477275 | orchestrator | 2025-02-04 10:34:34 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:34:37.524446 | orchestrator | 2025-02-04 10:34:34 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:34:37.524689 | orchestrator | 2025-02-04 10:34:37 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:34:37.524808 | 
orchestrator | 2025-02-04 10:34:37 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:34:40.581781 | orchestrator | 2025-02-04 10:34:37 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:34:40.581922 | orchestrator | 2025-02-04 10:34:40 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:34:40.583225 | orchestrator | 2025-02-04 10:34:40 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:34:43.628146 | orchestrator | 2025-02-04 10:34:40 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:34:43.628244 | orchestrator | 2025-02-04 10:34:43 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:34:43.629087 | orchestrator | 2025-02-04 10:34:43 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:34:46.677261 | orchestrator | 2025-02-04 10:34:43 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:34:46.677376 | orchestrator | 2025-02-04 10:34:46 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:34:46.678171 | orchestrator | 2025-02-04 10:34:46 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:34:46.678348 | orchestrator | 2025-02-04 10:34:46 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:34:49.725696 | orchestrator | 2025-02-04 10:34:49 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:34:49.726439 | orchestrator | 2025-02-04 10:34:49 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:34:49.726533 | orchestrator | 2025-02-04 10:34:49 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:34:52.779825 | orchestrator | 2025-02-04 10:34:52 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:34:52.780498 | orchestrator | 2025-02-04 10:34:52 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:34:55.834368 | orchestrator | 2025-02-04 10:34:52 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:34:55.834560 | orchestrator | 2025-02-04 10:34:55 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:34:55.835187 | orchestrator | 2025-02-04 10:34:55 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:34:58.881667 | orchestrator | 2025-02-04 10:34:55 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:34:58.881809 | orchestrator | 2025-02-04 10:34:58 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:34:58.882606 | orchestrator | 2025-02-04 10:34:58 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:34:58.882687 | orchestrator | 2025-02-04 10:34:58 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:35:01.936884 | orchestrator | 2025-02-04 10:35:01 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:35:04.986660 | orchestrator | 2025-02-04 10:35:01 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:35:04.986758 | orchestrator | 2025-02-04 10:35:01 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:35:04.986783 | orchestrator | 2025-02-04 10:35:04 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:35:04.987893 | orchestrator | 2025-02-04 10:35:04 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state 
STARTED 2025-02-04 10:35:08.036642 | orchestrator | 2025-02-04 10:35:04 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:35:08.036789 | orchestrator | 2025-02-04 10:35:08 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:35:08.037789 | orchestrator | 2025-02-04 10:35:08 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:35:11.076646 | orchestrator | 2025-02-04 10:35:08 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:35:11.076741 | orchestrator | 2025-02-04 10:35:11 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:35:11.078623 | orchestrator | 2025-02-04 10:35:11 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:35:14.134968 | orchestrator | 2025-02-04 10:35:11 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:35:14.135176 | orchestrator | 2025-02-04 10:35:14 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:35:14.136001 | orchestrator | 2025-02-04 10:35:14 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:35:17.191622 | orchestrator | 2025-02-04 10:35:14 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:35:17.191763 | orchestrator | 2025-02-04 10:35:17 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:35:17.192902 | orchestrator | 2025-02-04 10:35:17 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:35:17.193379 | orchestrator | 2025-02-04 10:35:17 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:35:20.242562 | orchestrator | 2025-02-04 10:35:20 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:35:20.243513 | orchestrator | 2025-02-04 10:35:20 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:35:23.305525 | orchestrator | 2025-02-04 10:35:20 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:35:23.305691 | orchestrator | 2025-02-04 10:35:23 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:35:23.307470 | orchestrator | 2025-02-04 10:35:23 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:35:26.354388 | orchestrator | 2025-02-04 10:35:23 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:35:26.354547 | orchestrator | 2025-02-04 10:35:26 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:35:29.402988 | orchestrator | 2025-02-04 10:35:26 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:35:29.403287 | orchestrator | 2025-02-04 10:35:26 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:35:29.403331 | orchestrator | 2025-02-04 10:35:29 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:35:29.404420 | orchestrator | 2025-02-04 10:35:29 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:35:29.404627 | orchestrator | 2025-02-04 10:35:29 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:35:32.452980 | orchestrator | 2025-02-04 10:35:32 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:35:32.453368 | orchestrator | 2025-02-04 10:35:32 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:35:32.453573 | orchestrator | 2025-02-04 10:35:32 | INFO  | Wait 1 second(s) 
until the next check 2025-02-04 10:35:35.504727 | orchestrator | 2025-02-04 10:35:35 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:35:35.505007 | orchestrator | 2025-02-04 10:35:35 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:35:38.555014 | orchestrator | 2025-02-04 10:35:35 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:35:38.555275 | orchestrator | 2025-02-04 10:35:38 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:35:38.555378 | orchestrator | 2025-02-04 10:35:38 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:35:41.606623 | orchestrator | 2025-02-04 10:35:38 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:35:41.606801 | orchestrator | 2025-02-04 10:35:41 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:35:41.606912 | orchestrator | 2025-02-04 10:35:41 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:35:44.655656 | orchestrator | 2025-02-04 10:35:41 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:35:44.655839 | orchestrator | 2025-02-04 10:35:44 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:35:44.656606 | orchestrator | 2025-02-04 10:35:44 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:35:47.706416 | orchestrator | 2025-02-04 10:35:44 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:35:47.706552 | orchestrator | 2025-02-04 10:35:47 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:35:47.707311 | orchestrator | 2025-02-04 10:35:47 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:35:50.748693 | orchestrator | 2025-02-04 10:35:47 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:35:50.748825 | orchestrator | 2025-02-04 10:35:50 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:35:50.749633 | orchestrator | 2025-02-04 10:35:50 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:35:50.749716 | orchestrator | 2025-02-04 10:35:50 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:35:53.798419 | orchestrator | 2025-02-04 10:35:53 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:35:53.798828 | orchestrator | 2025-02-04 10:35:53 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:35:56.850134 | orchestrator | 2025-02-04 10:35:53 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:35:56.850260 | orchestrator | 2025-02-04 10:35:56 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:35:56.853241 | orchestrator | 2025-02-04 10:35:56 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:35:59.906726 | orchestrator | 2025-02-04 10:35:56 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:35:59.906875 | orchestrator | 2025-02-04 10:35:59 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:35:59.908175 | orchestrator | 2025-02-04 10:35:59 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:36:02.959594 | orchestrator | 2025-02-04 10:35:59 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:36:02.959739 | orchestrator | 2025-02-04 10:36:02 | INFO  | 
Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:36:02.960301 | orchestrator | 2025-02-04 10:36:02 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:36:06.008634 | orchestrator | 2025-02-04 10:36:02 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:36:06.008800 | orchestrator | 2025-02-04 10:36:06 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:36:06.009136 | orchestrator | 2025-02-04 10:36:06 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:36:09.057452 | orchestrator | 2025-02-04 10:36:06 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:36:09.057592 | orchestrator | 2025-02-04 10:36:09 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:36:09.059114 | orchestrator | 2025-02-04 10:36:09 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:36:12.116102 | orchestrator | 2025-02-04 10:36:09 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:36:12.116238 | orchestrator | 2025-02-04 10:36:12 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:36:12.117162 | orchestrator | 2025-02-04 10:36:12 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:36:15.166142 | orchestrator | 2025-02-04 10:36:12 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:36:15.166293 | orchestrator | 2025-02-04 10:36:15 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:36:15.167189 | orchestrator | 2025-02-04 10:36:15 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:36:18.213389 | orchestrator | 2025-02-04 10:36:15 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:36:18.213535 | orchestrator | 2025-02-04 10:36:18 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:36:18.213703 | orchestrator | 2025-02-04 10:36:18 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:36:21.262432 | orchestrator | 2025-02-04 10:36:18 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:36:21.262556 | orchestrator | 2025-02-04 10:36:21 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:36:21.262969 | orchestrator | 2025-02-04 10:36:21 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:36:21.263060 | orchestrator | 2025-02-04 10:36:21 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:36:24.314957 | orchestrator | 2025-02-04 10:36:24 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:36:24.315285 | orchestrator | 2025-02-04 10:36:24 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:36:27.366220 | orchestrator | 2025-02-04 10:36:24 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:36:27.366333 | orchestrator | 2025-02-04 10:36:27 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:36:27.366883 | orchestrator | 2025-02-04 10:36:27 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:36:30.420463 | orchestrator | 2025-02-04 10:36:27 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:36:30.420627 | orchestrator | 2025-02-04 10:36:30 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:36:30.421388 | 
orchestrator | 2025-02-04 10:36:30 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:36:33.477159 | orchestrator | 2025-02-04 10:36:30 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:36:33.477363 | orchestrator | 2025-02-04 10:36:33 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:36:33.478774 | orchestrator | 2025-02-04 10:36:33 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:36:36.524563 | orchestrator | 2025-02-04 10:36:33 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:36:36.524660 | orchestrator | 2025-02-04 10:36:36 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:36:36.525833 | orchestrator | 2025-02-04 10:36:36 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:36:39.575721 | orchestrator | 2025-02-04 10:36:36 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:36:39.575850 | orchestrator | 2025-02-04 10:36:39 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:36:39.576990 | orchestrator | 2025-02-04 10:36:39 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:36:42.631399 | orchestrator | 2025-02-04 10:36:39 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:36:42.631546 | orchestrator | 2025-02-04 10:36:42 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:36:45.678836 | orchestrator | 2025-02-04 10:36:42 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:36:45.678971 | orchestrator | 2025-02-04 10:36:42 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:36:45.679089 | orchestrator | 2025-02-04 10:36:45 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:36:45.679727 | orchestrator | 2025-02-04 10:36:45 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:36:48.727758 | orchestrator | 2025-02-04 10:36:45 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:36:48.727889 | orchestrator | 2025-02-04 10:36:48 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:36:48.729364 | orchestrator | 2025-02-04 10:36:48 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:36:51.781102 | orchestrator | 2025-02-04 10:36:48 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:36:51.781252 | orchestrator | 2025-02-04 10:36:51 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:36:51.781488 | orchestrator | 2025-02-04 10:36:51 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:36:54.828838 | orchestrator | 2025-02-04 10:36:51 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:36:54.828985 | orchestrator | 2025-02-04 10:36:54 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:36:54.831189 | orchestrator | 2025-02-04 10:36:54 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:36:57.877985 | orchestrator | 2025-02-04 10:36:54 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:36:57.878199 | orchestrator | 2025-02-04 10:36:57 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:36:57.878791 | orchestrator | 2025-02-04 10:36:57 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state 
STARTED 2025-02-04 10:37:00.928812 | orchestrator | 2025-02-04 10:36:57 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:37:00.928979 | orchestrator | 2025-02-04 10:37:00 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:37:03.974658 | orchestrator | 2025-02-04 10:37:00 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:37:03.974783 | orchestrator | 2025-02-04 10:37:00 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:37:03.974824 | orchestrator | 2025-02-04 10:37:03 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:37:07.021318 | orchestrator | 2025-02-04 10:37:03 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:37:07.021431 | orchestrator | 2025-02-04 10:37:03 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:37:07.021457 | orchestrator | 2025-02-04 10:37:07 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:37:07.021659 | orchestrator | 2025-02-04 10:37:07 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:37:10.073267 | orchestrator | 2025-02-04 10:37:07 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:37:10.073410 | orchestrator | 2025-02-04 10:37:10 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:37:10.075220 | orchestrator | 2025-02-04 10:37:10 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:37:10.075328 | orchestrator | 2025-02-04 10:37:10 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:37:13.134821 | orchestrator | 2025-02-04 10:37:13 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:37:13.135671 | orchestrator | 2025-02-04 10:37:13 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:37:16.189643 | orchestrator | 2025-02-04 10:37:13 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:37:16.189781 | orchestrator | 2025-02-04 10:37:16 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:37:16.191270 | orchestrator | 2025-02-04 10:37:16 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:37:19.242083 | orchestrator | 2025-02-04 10:37:16 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:37:19.242293 | orchestrator | 2025-02-04 10:37:19 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:37:19.242732 | orchestrator | 2025-02-04 10:37:19 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:37:22.286504 | orchestrator | 2025-02-04 10:37:19 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:37:22.286676 | orchestrator | 2025-02-04 10:37:22 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:37:22.287283 | orchestrator | 2025-02-04 10:37:22 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:37:22.287603 | orchestrator | 2025-02-04 10:37:22 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:37:25.332151 | orchestrator | 2025-02-04 10:37:25 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:37:25.333531 | orchestrator | 2025-02-04 10:37:25 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:37:28.381848 | orchestrator | 2025-02-04 10:37:25 | INFO  | Wait 1 second(s) 
until the next check 2025-02-04 10:37:28.381989 | orchestrator | 2025-02-04 10:37:28 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:37:28.382888 | orchestrator | 2025-02-04 10:37:28 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:37:28.382971 | orchestrator | 2025-02-04 10:37:28 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:37:31.436110 | orchestrator | 2025-02-04 10:37:31 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:37:31.436685 | orchestrator | 2025-02-04 10:37:31 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:37:34.481702 | orchestrator | 2025-02-04 10:37:31 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:37:34.481876 | orchestrator | 2025-02-04 10:37:34 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:37:34.482168 | orchestrator | 2025-02-04 10:37:34 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:37:37.536364 | orchestrator | 2025-02-04 10:37:34 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:37:37.536563 | orchestrator | 2025-02-04 10:37:37 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:37:37.537289 | orchestrator | 2025-02-04 10:37:37 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:37:40.588515 | orchestrator | 2025-02-04 10:37:37 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:37:40.588669 | orchestrator | 2025-02-04 10:37:40 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:37:40.589391 | orchestrator | 2025-02-04 10:37:40 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:37:43.641826 | orchestrator | 2025-02-04 10:37:40 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:37:43.642120 | orchestrator | 2025-02-04 10:37:43 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:37:43.642533 | orchestrator | 2025-02-04 10:37:43 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:37:46.692558 | orchestrator | 2025-02-04 10:37:43 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:37:46.692783 | orchestrator | 2025-02-04 10:37:46 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:37:46.692883 | orchestrator | 2025-02-04 10:37:46 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:37:49.746761 | orchestrator | 2025-02-04 10:37:46 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:37:49.746946 | orchestrator | 2025-02-04 10:37:49 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:37:49.747842 | orchestrator | 2025-02-04 10:37:49 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:37:49.747917 | orchestrator | 2025-02-04 10:37:49 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:37:52.797083 | orchestrator | 2025-02-04 10:37:52 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:37:52.797862 | orchestrator | 2025-02-04 10:37:52 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:37:55.849924 | orchestrator | 2025-02-04 10:37:52 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:37:55.850294 | orchestrator | 2025-02-04 10:37:55 | INFO  | 
Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:37:55.850409 | orchestrator | 2025-02-04 10:37:55 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:37:55.850548 | orchestrator | 2025-02-04 10:37:55 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:37:58.902870 | orchestrator | 2025-02-04 10:37:58 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:37:58.904495 | orchestrator | 2025-02-04 10:37:58 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:38:01.952160 | orchestrator | 2025-02-04 10:37:58 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:38:01.952302 | orchestrator | 2025-02-04 10:38:01 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:38:01.953984 | orchestrator | 2025-02-04 10:38:01 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:38:05.007922 | orchestrator | 2025-02-04 10:38:01 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:38:05.008082 | orchestrator | 2025-02-04 10:38:05 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:38:05.011699 | orchestrator | 2025-02-04 10:38:05 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:38:08.059197 | orchestrator | 2025-02-04 10:38:05 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:38:08.059321 | orchestrator | 2025-02-04 10:38:08 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:38:08.060538 | orchestrator | 2025-02-04 10:38:08 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:38:11.113646 | orchestrator | 2025-02-04 10:38:08 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:38:11.113784 | orchestrator | 2025-02-04 10:38:11 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:38:11.114915 | orchestrator | 2025-02-04 10:38:11 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:38:14.154379 | orchestrator | 2025-02-04 10:38:11 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:38:14.154526 | orchestrator | 2025-02-04 10:38:14 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:38:17.201883 | orchestrator | 2025-02-04 10:38:14 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:38:17.202178 | orchestrator | 2025-02-04 10:38:14 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:38:17.202249 | orchestrator | 2025-02-04 10:38:17 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:38:17.202395 | orchestrator | 2025-02-04 10:38:17 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:38:20.257205 | orchestrator | 2025-02-04 10:38:17 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:38:20.257357 | orchestrator | 2025-02-04 10:38:20 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:38:20.258191 | orchestrator | 2025-02-04 10:38:20 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:38:20.258814 | orchestrator | 2025-02-04 10:38:20 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:38:23.307146 | orchestrator | 2025-02-04 10:38:23 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:38:23.307709 | 
orchestrator | 2025-02-04 10:38:23 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:38:23.307813 | orchestrator | 2025-02-04 10:38:23 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:38:26.354937 | orchestrator | 2025-02-04 10:38:26 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:38:26.355879 | orchestrator | 2025-02-04 10:38:26 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:38:26.356439 | orchestrator | 2025-02-04 10:38:26 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:38:29.406192 | orchestrator | 2025-02-04 10:38:29 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:38:29.407154 | orchestrator | 2025-02-04 10:38:29 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:38:29.407289 | orchestrator | 2025-02-04 10:38:29 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:38:32.454289 | orchestrator | 2025-02-04 10:38:32 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:38:32.456481 | orchestrator | 2025-02-04 10:38:32 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:38:32.456942 | orchestrator | 2025-02-04 10:38:32 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:38:35.511049 | orchestrator | 2025-02-04 10:38:35 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:38:35.512331 | orchestrator | 2025-02-04 10:38:35 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:38:38.562520 | orchestrator | 2025-02-04 10:38:35 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:38:38.562691 | orchestrator | 2025-02-04 10:38:38 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:38:38.562815 | orchestrator | 2025-02-04 10:38:38 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:38:41.614609 | orchestrator | 2025-02-04 10:38:38 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:38:41.614809 | orchestrator | 2025-02-04 10:38:41 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:38:41.615370 | orchestrator | 2025-02-04 10:38:41 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:38:44.657797 | orchestrator | 2025-02-04 10:38:41 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:38:44.657947 | orchestrator | 2025-02-04 10:38:44 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:38:44.658666 | orchestrator | 2025-02-04 10:38:44 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:38:44.658835 | orchestrator | 2025-02-04 10:38:44 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:38:47.705567 | orchestrator | 2025-02-04 10:38:47 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:38:47.705741 | orchestrator | 2025-02-04 10:38:47 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:38:50.748243 | orchestrator | 2025-02-04 10:38:47 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:38:50.748394 | orchestrator | 2025-02-04 10:38:50 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:38:50.748982 | orchestrator | 2025-02-04 10:38:50 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state 
STARTED 2025-02-04 10:38:53.802828 | orchestrator | 2025-02-04 10:38:50 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:38:53.802970 | orchestrator | 2025-02-04 10:38:53 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:38:53.803751 | orchestrator | 2025-02-04 10:38:53 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:38:56.852549 | orchestrator | 2025-02-04 10:38:53 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:38:56.852661 | orchestrator | 2025-02-04 10:38:56 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:38:56.853053 | orchestrator | 2025-02-04 10:38:56 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:38:59.905396 | orchestrator | 2025-02-04 10:38:56 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:38:59.905524 | orchestrator | 2025-02-04 10:38:59 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:38:59.907234 | orchestrator | 2025-02-04 10:38:59 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:39:02.957650 | orchestrator | 2025-02-04 10:38:59 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:39:02.957791 | orchestrator | 2025-02-04 10:39:02 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:39:06.012324 | orchestrator | 2025-02-04 10:39:02 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:39:06.012436 | orchestrator | 2025-02-04 10:39:02 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:39:06.012467 | orchestrator | 2025-02-04 10:39:06 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:39:06.012653 | orchestrator | 2025-02-04 10:39:06 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:39:09.059584 | orchestrator | 2025-02-04 10:39:06 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:39:09.059785 | orchestrator | 2025-02-04 10:39:09 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:39:09.060714 | orchestrator | 2025-02-04 10:39:09 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:39:12.105070 | orchestrator | 2025-02-04 10:39:09 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:39:12.105165 | orchestrator | 2025-02-04 10:39:12 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:39:15.143048 | orchestrator | 2025-02-04 10:39:12 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:39:15.143207 | orchestrator | 2025-02-04 10:39:12 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:39:15.143260 | orchestrator | 2025-02-04 10:39:15 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:39:15.145817 | orchestrator | 2025-02-04 10:39:15 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:39:18.190266 | orchestrator | 2025-02-04 10:39:15 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:39:18.190396 | orchestrator | 2025-02-04 10:39:18 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:39:18.190487 | orchestrator | 2025-02-04 10:39:18 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:39:21.236739 | orchestrator | 2025-02-04 10:39:18 | INFO  | Wait 1 second(s) 
until the next check 2025-02-04 10:39:21.236906 | orchestrator | 2025-02-04 10:39:21 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:39:21.237491 | orchestrator | 2025-02-04 10:39:21 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:39:24.294209 | orchestrator | 2025-02-04 10:39:21 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:39:24.294343 | orchestrator | 2025-02-04 10:39:24 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:39:24.294749 | orchestrator | 2025-02-04 10:39:24 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:39:27.339109 | orchestrator | 2025-02-04 10:39:24 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:39:27.339295 | orchestrator | 2025-02-04 10:39:27 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:39:27.340599 | orchestrator | 2025-02-04 10:39:27 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:39:30.379539 | orchestrator | 2025-02-04 10:39:27 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:39:30.379684 | orchestrator | 2025-02-04 10:39:30 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:39:33.428079 | orchestrator | 2025-02-04 10:39:30 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:39:33.428208 | orchestrator | 2025-02-04 10:39:30 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:39:33.428248 | orchestrator | 2025-02-04 10:39:33 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:39:36.477029 | orchestrator | 2025-02-04 10:39:33 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:39:36.477156 | orchestrator | 2025-02-04 10:39:33 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:39:36.477194 | orchestrator | 2025-02-04 10:39:36 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:39:36.477285 | orchestrator | 2025-02-04 10:39:36 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:39:39.528745 | orchestrator | 2025-02-04 10:39:36 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:39:39.528907 | orchestrator | 2025-02-04 10:39:39 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:39:39.530378 | orchestrator | 2025-02-04 10:39:39 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:39:42.580128 | orchestrator | 2025-02-04 10:39:39 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:39:42.580274 | orchestrator | 2025-02-04 10:39:42 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:39:42.580693 | orchestrator | 2025-02-04 10:39:42 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:39:45.628529 | orchestrator | 2025-02-04 10:39:42 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:39:45.628690 | orchestrator | 2025-02-04 10:39:45 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:39:45.630512 | orchestrator | 2025-02-04 10:39:45 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:39:48.679759 | orchestrator | 2025-02-04 10:39:45 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:39:48.679903 | orchestrator | 2025-02-04 10:39:48 | INFO  | 
Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:39:48.680916 | orchestrator | 2025-02-04 10:39:48 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:39:51.722433 | orchestrator | 2025-02-04 10:39:48 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:39:51.722582 | orchestrator | 2025-02-04 10:39:51 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:39:51.722830 | orchestrator | 2025-02-04 10:39:51 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:39:54.768154 | orchestrator | 2025-02-04 10:39:51 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:39:54.768294 | orchestrator | 2025-02-04 10:39:54 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:39:54.769597 | orchestrator | 2025-02-04 10:39:54 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:39:57.820903 | orchestrator | 2025-02-04 10:39:54 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:39:57.821107 | orchestrator | 2025-02-04 10:39:57 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:39:57.821303 | orchestrator | 2025-02-04 10:39:57 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:39:57.821375 | orchestrator | 2025-02-04 10:39:57 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:40:00.868818 | orchestrator | 2025-02-04 10:40:00 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:40:00.869939 | orchestrator | 2025-02-04 10:40:00 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:40:03.921010 | orchestrator | 2025-02-04 10:40:00 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:40:03.921107 | orchestrator | 2025-02-04 10:40:03 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:40:03.921149 | orchestrator | 2025-02-04 10:40:03 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:40:03.921160 | orchestrator | 2025-02-04 10:40:03 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:40:06.966072 | orchestrator | 2025-02-04 10:40:06 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:40:10.018288 | orchestrator | 2025-02-04 10:40:06 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:40:10.018527 | orchestrator | 2025-02-04 10:40:06 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:40:10.018581 | orchestrator | 2025-02-04 10:40:10 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:40:10.018675 | orchestrator | 2025-02-04 10:40:10 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:40:13.061577 | orchestrator | 2025-02-04 10:40:10 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:40:13.061760 | orchestrator | 2025-02-04 10:40:13 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:40:13.061853 | orchestrator | 2025-02-04 10:40:13 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:40:16.108575 | orchestrator | 2025-02-04 10:40:13 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:40:16.108708 | orchestrator | 2025-02-04 10:40:16 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:40:16.109551 | 
orchestrator | 2025-02-04 10:40:16 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:40:19.160465 | orchestrator | 2025-02-04 10:40:16 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:40:19.160654 | orchestrator | 2025-02-04 10:40:19 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:40:19.164069 | orchestrator | 2025-02-04 10:40:19 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:40:19.165319 | orchestrator | 2025-02-04 10:40:19 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:40:22.218386 | orchestrator | 2025-02-04 10:40:22 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:40:25.272165 | orchestrator | 2025-02-04 10:40:22 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:40:25.272287 | orchestrator | 2025-02-04 10:40:22 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:40:25.272326 | orchestrator | 2025-02-04 10:40:25 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:40:25.273220 | orchestrator | 2025-02-04 10:40:25 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:40:28.321321 | orchestrator | 2025-02-04 10:40:25 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:40:28.321503 | orchestrator | 2025-02-04 10:40:28 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:40:28.321695 | orchestrator | 2025-02-04 10:40:28 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:40:31.365055 | orchestrator | 2025-02-04 10:40:28 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:40:31.365173 | orchestrator | 2025-02-04 10:40:31 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:40:31.366408 | orchestrator | 2025-02-04 10:40:31 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:40:34.408445 | orchestrator | 2025-02-04 10:40:31 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:40:34.408600 | orchestrator | 2025-02-04 10:40:34 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:40:34.408643 | orchestrator | 2025-02-04 10:40:34 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:40:37.454401 | orchestrator | 2025-02-04 10:40:34 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:40:37.454544 | orchestrator | 2025-02-04 10:40:37 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:40:37.454686 | orchestrator | 2025-02-04 10:40:37 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:40:40.503969 | orchestrator | 2025-02-04 10:40:37 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:40:40.504091 | orchestrator | 2025-02-04 10:40:40 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:40:43.560339 | orchestrator | 2025-02-04 10:40:40 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:40:43.560455 | orchestrator | 2025-02-04 10:40:40 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:40:43.560494 | orchestrator | 2025-02-04 10:40:43 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:40:43.560826 | orchestrator | 2025-02-04 10:40:43 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state 
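The repeated entries above come from a state-polling loop in the deploy playbook: it enqueues OSISM tasks, then re-reads each task's state once per pass, logs one line per task, announces the wait, and sleeps before the next pass, until every task has finished or the job deadline is reached. (The executor timestamps show roughly three seconds between passes, since each pass also spends time querying task states.) A minimal sketch of that pattern, assuming a hypothetical get_task_state(task_id) helper in place of the real osism client call:

    import time

    def get_task_state(task_id: str) -> str:
        # Hypothetical stand-in for the task-state lookup done by the
        # real osism tooling; returns e.g. "STARTED" or "SUCCESS".
        raise NotImplementedError

    def wait_for_tasks(task_ids, interval=1.0, deadline_s=7200.0):
        # Poll until no task is left unfinished or the deadline passes.
        deadline = time.monotonic() + deadline_s
        pending = set(task_ids)
        while pending and time.monotonic() < deadline:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state == "SUCCESS":
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)
        return not pending  # False means the wait hit the deadline

In this build the two remaining tasks never left STARTED, so the loop ran until the Zuul job timeout; the run therefore ends below with RESULT_TIMED_OUT rather than a task failure.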
STARTED 2025-02-04 10:40:46.610384 | orchestrator | 2025-02-04 10:40:43 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:40:46.610534 | orchestrator | 2025-02-04 10:40:46 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:40:46.611218 | orchestrator | 2025-02-04 10:40:46 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:40:49.657335 | orchestrator | 2025-02-04 10:40:46 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:40:49.657523 | orchestrator | 2025-02-04 10:40:49 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:40:49.657622 | orchestrator | 2025-02-04 10:40:49 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:40:52.710278 | orchestrator | 2025-02-04 10:40:49 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:40:52.710432 | orchestrator | 2025-02-04 10:40:52 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:40:52.710842 | orchestrator | 2025-02-04 10:40:52 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:40:55.759748 | orchestrator | 2025-02-04 10:40:52 | INFO  | Wait 1 second(s) until the next check 2025-02-04 10:40:55.759881 | orchestrator | 2025-02-04 10:40:55 | INFO  | Task 7e541f1e-d12b-4499-a990-004ec22bccd6 is in state STARTED 2025-02-04 10:40:55.761237 | orchestrator | 2025-02-04 10:40:55 | INFO  | Task 0fe9ee47-86c4-4b01-afb8-83e483a6870c is in state STARTED 2025-02-04 10:40:58.553911 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-02-04 10:40:58.558990 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-02-04 10:40:59.285463 | 2025-02-04 10:40:59.285640 | PLAY [Post output play] 2025-02-04 10:40:59.315789 | 2025-02-04 10:40:59.315943 | LOOP [stage-output : Register sources] 2025-02-04 10:40:59.399268 | 2025-02-04 10:40:59.399529 | TASK [stage-output : Check sudo] 2025-02-04 10:41:00.140326 | orchestrator | sudo: a password is required 2025-02-04 10:41:00.444288 | orchestrator | ok: Runtime: 0:00:00.012049 2025-02-04 10:41:00.462810 | 2025-02-04 10:41:00.462967 | LOOP [stage-output : Set source and destination for files and folders] 2025-02-04 10:41:00.508169 | 2025-02-04 10:41:00.508545 | TASK [stage-output : Build a list of source, dest dictionaries] 2025-02-04 10:41:00.612815 | orchestrator | ok 2025-02-04 10:41:00.621273 | 2025-02-04 10:41:00.621396 | LOOP [stage-output : Ensure target folders exist] 2025-02-04 10:41:01.131529 | orchestrator | ok: "docs" 2025-02-04 10:41:01.131885 | 2025-02-04 10:41:01.366661 | orchestrator | ok: "artifacts" 2025-02-04 10:41:01.625842 | orchestrator | ok: "logs" 2025-02-04 10:41:01.648443 | 2025-02-04 10:41:01.648635 | LOOP [stage-output : Copy files and folders to staging folder] 2025-02-04 10:41:01.691956 | 2025-02-04 10:41:01.692232 | TASK [stage-output : Make all log files readable] 2025-02-04 10:41:02.008018 | orchestrator | ok 2025-02-04 10:41:02.018338 | 2025-02-04 10:41:02.018465 | TASK [stage-output : Rename log files that match extensions_to_txt] 2025-02-04 10:41:02.066040 | orchestrator | skipping: Conditional result was False 2025-02-04 10:41:02.083123 | 2025-02-04 10:41:02.083297 | TASK [stage-output : Discover log files for compression] 2025-02-04 10:41:02.109809 | orchestrator | skipping: Conditional result was False 2025-02-04 10:41:02.131357 | 2025-02-04 10:41:02.131607 | LOOP [stage-output : 
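The run playbook spends its final minutes in the poll loop above: the deployment driver checks the state of two OSISM tasks on each pass and sleeps between checks, and since neither task ever leaves STARTED, it is the surrounding Zuul job timeout that aborts the run (RESULT_TIMED_OUT), not the loop itself. A minimal Python sketch of this poll-until-done pattern, assuming a hypothetical get_task_state() callable in place of the real OSISM task API:

import time

POLL_INTERVAL = 1  # matches the "Wait 1 second(s) until the next check" messages
TERMINAL_STATES = {"SUCCESS", "FAILURE"}  # assumed terminal task states

def wait_for_tasks(task_ids, get_task_state):
    """Poll every task until all of them reach a terminal state.

    No timeout is enforced here; in the job above that is left to Zuul,
    which is why a stuck task surfaces as RESULT_TIMED_OUT rather than
    as an error from this loop.
    """
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            print(f"Wait {POLL_INTERVAL} second(s) until the next check")
            time.sleep(POLL_INTERVAL)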
2025-02-04 10:40:58.558990 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-02-04 10:40:59.285463 |
2025-02-04 10:40:59.285640 | PLAY [Post output play]
2025-02-04 10:40:59.315789 |
2025-02-04 10:40:59.315943 | LOOP [stage-output : Register sources]
2025-02-04 10:40:59.399268 |
2025-02-04 10:40:59.399529 | TASK [stage-output : Check sudo]
2025-02-04 10:41:00.140326 | orchestrator | sudo: a password is required
2025-02-04 10:41:00.444288 | orchestrator | ok: Runtime: 0:00:00.012049
2025-02-04 10:41:00.462810 |
2025-02-04 10:41:00.462967 | LOOP [stage-output : Set source and destination for files and folders]
2025-02-04 10:41:00.508169 |
2025-02-04 10:41:00.508545 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-02-04 10:41:00.612815 | orchestrator | ok
2025-02-04 10:41:00.621273 |
2025-02-04 10:41:00.621396 | LOOP [stage-output : Ensure target folders exist]
2025-02-04 10:41:01.131529 | orchestrator | ok: "docs"
2025-02-04 10:41:01.131885 |
2025-02-04 10:41:01.366661 | orchestrator | ok: "artifacts"
2025-02-04 10:41:01.625842 | orchestrator | ok: "logs"
2025-02-04 10:41:01.648443 |
2025-02-04 10:41:01.648635 | LOOP [stage-output : Copy files and folders to staging folder]
2025-02-04 10:41:01.691956 |
2025-02-04 10:41:01.692232 | TASK [stage-output : Make all log files readable]
2025-02-04 10:41:02.008018 | orchestrator | ok
2025-02-04 10:41:02.018338 |
2025-02-04 10:41:02.018465 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-02-04 10:41:02.066040 | orchestrator | skipping: Conditional result was False
2025-02-04 10:41:02.083123 |
2025-02-04 10:41:02.083297 | TASK [stage-output : Discover log files for compression]
2025-02-04 10:41:02.109809 | orchestrator | skipping: Conditional result was False
2025-02-04 10:41:02.131357 |
2025-02-04 10:41:02.131607 | LOOP [stage-output : Archive everything from logs]
2025-02-04 10:41:02.214099 |
2025-02-04 10:41:02.214267 | PLAY [Post cleanup play]
2025-02-04 10:41:02.238296 |
2025-02-04 10:41:02.238403 | TASK [Set cloud fact (Zuul deployment)]
2025-02-04 10:41:02.309345 | orchestrator | ok
2025-02-04 10:41:02.319412 |
2025-02-04 10:41:02.319528 | TASK [Set cloud fact (local deployment)]
2025-02-04 10:41:02.364901 | orchestrator | skipping: Conditional result was False
2025-02-04 10:41:02.383784 |
2025-02-04 10:41:02.383919 | TASK [Clean the cloud environment]
2025-02-04 10:41:03.269446 | orchestrator | 2025-02-04 10:41:03 - clean up servers
2025-02-04 10:41:04.158305 | orchestrator | 2025-02-04 10:41:04 - testbed-manager
2025-02-04 10:41:04.241763 | orchestrator | 2025-02-04 10:41:04 - testbed-node-0
2025-02-04 10:41:04.330386 | orchestrator | 2025-02-04 10:41:04 - testbed-node-2
2025-02-04 10:41:04.418277 | orchestrator | 2025-02-04 10:41:04 - testbed-node-4
2025-02-04 10:41:04.513915 | orchestrator | 2025-02-04 10:41:04 - testbed-node-3
2025-02-04 10:41:04.604617 | orchestrator | 2025-02-04 10:41:04 - testbed-node-1
2025-02-04 10:41:04.697902 | orchestrator | 2025-02-04 10:41:04 - testbed-node-5
2025-02-04 10:41:04.797883 | orchestrator | 2025-02-04 10:41:04 - clean up keypairs
2025-02-04 10:41:04.816816 | orchestrator | 2025-02-04 10:41:04 - testbed
2025-02-04 10:41:04.842284 | orchestrator | 2025-02-04 10:41:04 - wait for servers to be gone
2025-02-04 10:41:16.132390 | orchestrator | 2025-02-04 10:41:16 - clean up ports
2025-02-04 10:41:16.381249 | orchestrator | 2025-02-04 10:41:16 - 105cecd5-bfc9-4dd1-9aa5-560a60f67b4a
2025-02-04 10:41:16.590475 | orchestrator | 2025-02-04 10:41:16 - 196f9997-894f-4daf-8c85-e47953674213
2025-02-04 10:41:17.624096 | orchestrator | 2025-02-04 10:41:17 - 2f2c947f-242b-489f-b287-bfcae87a8bf6
2025-02-04 10:41:17.824487 | orchestrator | 2025-02-04 10:41:17 - 497bdf92-437f-4ada-bcba-fbc7e0e08f96
2025-02-04 10:41:18.019623 | orchestrator | 2025-02-04 10:41:18 - 7f8daea3-bc26-4f0a-be75-79a4d2492443
2025-02-04 10:41:18.222323 | orchestrator | 2025-02-04 10:41:18 - 8735a19d-4ff1-477f-a163-14693db6c3ab
2025-02-04 10:41:18.422497 | orchestrator | 2025-02-04 10:41:18 - bc740388-1657-43a4-b732-c932b6894a34
2025-02-04 10:41:18.766256 | orchestrator | 2025-02-04 10:41:18 - clean up volumes
2025-02-04 10:41:18.927697 | orchestrator | 2025-02-04 10:41:18 - testbed-volume-5-node-base
2025-02-04 10:41:18.971465 | orchestrator | 2025-02-04 10:41:18 - testbed-volume-4-node-base
2025-02-04 10:41:19.016797 | orchestrator | 2025-02-04 10:41:19 - testbed-volume-3-node-base
2025-02-04 10:41:19.060445 | orchestrator | 2025-02-04 10:41:19 - testbed-volume-0-node-base
2025-02-04 10:41:19.108828 | orchestrator | 2025-02-04 10:41:19 - testbed-volume-2-node-base
2025-02-04 10:41:19.151634 | orchestrator | 2025-02-04 10:41:19 - testbed-volume-1-node-base
2025-02-04 10:41:19.192953 | orchestrator | 2025-02-04 10:41:19 - testbed-volume-manager-base
2025-02-04 10:41:19.237284 | orchestrator | 2025-02-04 10:41:19 - testbed-volume-4-node-4
2025-02-04 10:41:19.282439 | orchestrator | 2025-02-04 10:41:19 - testbed-volume-10-node-4
2025-02-04 10:41:19.328510 | orchestrator | 2025-02-04 10:41:19 - testbed-volume-8-node-2
2025-02-04 10:41:19.370210 | orchestrator | 2025-02-04 10:41:19 - testbed-volume-16-node-4
2025-02-04 10:41:19.415306 | orchestrator | 2025-02-04 10:41:19 - testbed-volume-3-node-3
2025-02-04 10:41:19.457963 | orchestrator | 2025-02-04 10:41:19 - testbed-volume-11-node-5
2025-02-04 10:41:19.502190 | orchestrator | 2025-02-04 10:41:19 - testbed-volume-1-node-1
2025-02-04 10:41:19.552161 | orchestrator | 2025-02-04 10:41:19 - testbed-volume-6-node-0
2025-02-04 10:41:19.598741 | orchestrator | 2025-02-04 10:41:19 - testbed-volume-15-node-3
2025-02-04 10:41:19.648376 | orchestrator | 2025-02-04 10:41:19 - testbed-volume-17-node-5
2025-02-04 10:41:19.693579 | orchestrator | 2025-02-04 10:41:19 - testbed-volume-2-node-2
2025-02-04 10:41:19.748814 | orchestrator | 2025-02-04 10:41:19 - testbed-volume-0-node-0
2025-02-04 10:41:19.795923 | orchestrator | 2025-02-04 10:41:19 - testbed-volume-9-node-3
2025-02-04 10:41:19.841144 | orchestrator | 2025-02-04 10:41:19 - testbed-volume-5-node-5
2025-02-04 10:41:19.882156 | orchestrator | 2025-02-04 10:41:19 - testbed-volume-7-node-1
2025-02-04 10:41:19.923173 | orchestrator | 2025-02-04 10:41:19 - testbed-volume-13-node-1
2025-02-04 10:41:19.971869 | orchestrator | 2025-02-04 10:41:19 - testbed-volume-14-node-2
2025-02-04 10:41:20.020645 | orchestrator | 2025-02-04 10:41:20 - testbed-volume-12-node-0
2025-02-04 10:41:20.073066 | orchestrator | 2025-02-04 10:41:20 - disconnect routers
2025-02-04 10:41:20.125957 | orchestrator | 2025-02-04 10:41:20 - testbed
2025-02-04 10:41:20.805467 | orchestrator | 2025-02-04 10:41:20 - clean up subnets
2025-02-04 10:41:20.901142 | orchestrator | 2025-02-04 10:41:20 - subnet-testbed-management
2025-02-04 10:41:20.961067 | orchestrator | 2025-02-04 10:41:20 - clean up networks
2025-02-04 10:41:21.125759 | orchestrator | 2025-02-04 10:41:21 - net-testbed-management
2025-02-04 10:41:21.388962 | orchestrator | 2025-02-04 10:41:21 - clean up security groups
2025-02-04 10:41:21.425411 | orchestrator | 2025-02-04 10:41:21 - testbed-node
2025-02-04 10:41:21.527213 | orchestrator | 2025-02-04 10:41:21 - testbed-management
2025-02-04 10:41:21.615830 | orchestrator | 2025-02-04 10:41:21 - clean up floating ips
2025-02-04 10:41:21.649426 | orchestrator | 2025-02-04 10:41:21 - 81.163.193.89
2025-02-04 10:41:22.108405 | orchestrator | 2025-02-04 10:41:22 - clean up routers
2025-02-04 10:41:22.159552 | orchestrator | 2025-02-04 10:41:22 - testbed
2025-02-04 10:41:22.945321 | orchestrator | changed
2025-02-04 10:41:22.993643 |
2025-02-04 10:41:22.993756 | PLAY RECAP
2025-02-04 10:41:22.993814 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-02-04 10:41:22.993841 |
2025-02-04 10:41:23.111915 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
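The "Clean the cloud environment" task above tears the testbed's OpenStack resources down in dependency order: servers and the keypair first, then a wait until the servers are really gone, then ports, volumes, router interfaces, subnets, networks, security groups, floating IPs, and finally the router itself. A rough openstacksdk sketch of that ordering follows; it assumes everything carries a "testbed" name prefix and is an illustration only, not the cleanup script the job actually runs:

import openstack

def clean_testbed(cloud_name, prefix="testbed"):
    conn = openstack.connect(cloud=cloud_name)

    # 1. Servers and the keypair; ports and volumes stay busy until the
    #    servers are gone, hence the explicit wait ("wait for servers to
    #    be gone" in the log above).
    servers = [s for s in conn.compute.servers() if s.name.startswith(prefix)]
    for server in servers:
        conn.compute.delete_server(server)
    for keypair in conn.compute.keypairs():
        if keypair.name.startswith(prefix):
            conn.compute.delete_keypair(keypair)
    for server in servers:
        conn.compute.wait_for_delete(server)

    # 2. Leftover ports on the management network, then volumes.
    for network in conn.network.networks():
        if network.name.startswith(f"net-{prefix}"):
            for port in conn.network.ports(network_id=network.id):
                conn.network.delete_port(port)
    for volume in conn.block_storage.volumes():
        if volume.name.startswith(f"{prefix}-volume"):
            conn.block_storage.delete_volume(volume)

    # 3. Network plumbing: detach router interfaces before deleting
    #    subnets and networks; the router itself goes last.
    routers = [r for r in conn.network.routers() if r.name.startswith(prefix)]
    subnets = [s for s in conn.network.subnets()
               if s.name.startswith(f"subnet-{prefix}")]
    for router in routers:
        for subnet in subnets:
            conn.network.remove_interface_from_router(router, subnet_id=subnet.id)
    for subnet in subnets:
        conn.network.delete_subnet(subnet)
    for network in conn.network.networks():
        if network.name.startswith(f"net-{prefix}"):
            conn.network.delete_network(network)
    for group in conn.network.security_groups():
        if group.name.startswith(prefix):
            conn.network.delete_security_group(group)
    for ip in conn.network.ips():
        if ip.status == "DOWN":  # floating IPs orphaned by the port cleanup
            conn.network.delete_ip(ip)
    for router in routers:
        conn.network.delete_router(router)

The ordering is the important part: Neutron refuses to delete a subnet that still has a router interface, and a network that still has ports, so the teardown has to unwind the dependencies the deployment created.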
"/var/lib/zuul/builds/6d1a90cbcc2642bb8f983473e166609b/work/docs" 2025-02-04 10:41:25.168162 | 2025-02-04 10:41:25.168345 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-02-04 10:41:26.007695 | orchestrator | changed: .d..t...... ./ 2025-02-04 10:41:26.007972 | orchestrator | changed: All items complete 2025-02-04 10:41:26.008013 | 2025-02-04 10:41:26.613342 | orchestrator | changed: .d..t...... ./ 2025-02-04 10:41:27.290649 | orchestrator | changed: .d..t...... ./ 2025-02-04 10:41:27.326127 | 2025-02-04 10:41:27.326317 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-02-04 10:41:27.377777 | orchestrator | skipping: Conditional result was False 2025-02-04 10:41:27.384912 | orchestrator | skipping: Conditional result was False 2025-02-04 10:41:27.440120 | 2025-02-04 10:41:27.440233 | PLAY RECAP 2025-02-04 10:41:27.440289 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-02-04 10:41:27.440317 | 2025-02-04 10:41:27.558710 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-02-04 10:41:27.566945 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-02-04 10:41:28.289636 | 2025-02-04 10:41:28.289802 | PLAY [Base post] 2025-02-04 10:41:28.319497 | 2025-02-04 10:41:28.319629 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-02-04 10:41:29.176987 | orchestrator | changed 2025-02-04 10:41:29.217161 | 2025-02-04 10:41:29.217323 | PLAY RECAP 2025-02-04 10:41:29.217392 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-02-04 10:41:29.217458 | 2025-02-04 10:41:29.345163 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-02-04 10:41:29.353431 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-02-04 10:41:30.131209 | 2025-02-04 10:41:30.131378 | PLAY [Base post-logs] 2025-02-04 10:41:30.148025 | 2025-02-04 10:41:30.148156 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-02-04 10:41:30.625961 | localhost | changed 2025-02-04 10:41:30.630137 | 2025-02-04 10:41:30.630291 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-02-04 10:41:30.669452 | localhost | ok 2025-02-04 10:41:30.678115 | 2025-02-04 10:41:30.678299 | TASK [Set zuul-log-path fact] 2025-02-04 10:41:30.706902 | localhost | ok 2025-02-04 10:41:30.722793 | 2025-02-04 10:41:30.722924 | TASK [set-zuul-log-path-fact : Set log path for a change] 2025-02-04 10:41:30.768100 | localhost | skipping: Conditional result was False 2025-02-04 10:41:30.775681 | 2025-02-04 10:41:30.775875 | TASK [set-zuul-log-path-fact : Set log path for a ref update] 2025-02-04 10:41:30.815717 | localhost | ok 2025-02-04 10:41:30.818902 | 2025-02-04 10:41:30.819005 | TASK [set-zuul-log-path-fact : Set log path for a periodic job] 2025-02-04 10:41:30.864201 | localhost | skipping: Conditional result was False 2025-02-04 10:41:30.867675 | 2025-02-04 10:41:30.867782 | TASK [set-zuul-log-path-fact : Set log path for a change] 2025-02-04 10:41:30.892215 | localhost | skipping: Conditional result was False 2025-02-04 10:41:30.895701 | 2025-02-04 10:41:30.895812 | TASK [set-zuul-log-path-fact : Set log path for a ref update] 2025-02-04 10:41:30.920032 | localhost | skipping: Conditional result was False 2025-02-04 10:41:30.923683 | 2025-02-04 10:41:30.923788 | TASK [set-zuul-log-path-fact : Set 
2025-02-04 10:41:29.353431 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-02-04 10:41:30.131209 |
2025-02-04 10:41:30.131378 | PLAY [Base post-logs]
2025-02-04 10:41:30.148025 |
2025-02-04 10:41:30.148156 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-02-04 10:41:30.625961 | localhost | changed
2025-02-04 10:41:30.630137 |
2025-02-04 10:41:30.630291 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-02-04 10:41:30.669452 | localhost | ok
2025-02-04 10:41:30.678115 |
2025-02-04 10:41:30.678299 | TASK [Set zuul-log-path fact]
2025-02-04 10:41:30.706902 | localhost | ok
2025-02-04 10:41:30.722793 |
2025-02-04 10:41:30.722924 | TASK [set-zuul-log-path-fact : Set log path for a change]
2025-02-04 10:41:30.768100 | localhost | skipping: Conditional result was False
2025-02-04 10:41:30.775681 |
2025-02-04 10:41:30.775875 | TASK [set-zuul-log-path-fact : Set log path for a ref update]
2025-02-04 10:41:30.815717 | localhost | ok
2025-02-04 10:41:30.818902 |
2025-02-04 10:41:30.819005 | TASK [set-zuul-log-path-fact : Set log path for a periodic job]
2025-02-04 10:41:30.864201 | localhost | skipping: Conditional result was False
2025-02-04 10:41:30.867675 |
2025-02-04 10:41:30.867782 | TASK [set-zuul-log-path-fact : Set log path for a change]
2025-02-04 10:41:30.892215 | localhost | skipping: Conditional result was False
2025-02-04 10:41:30.895701 |
2025-02-04 10:41:30.895812 | TASK [set-zuul-log-path-fact : Set log path for a ref update]
2025-02-04 10:41:30.920032 | localhost | skipping: Conditional result was False
2025-02-04 10:41:30.923683 |
2025-02-04 10:41:30.923788 | TASK [set-zuul-log-path-fact : Set log path for a periodic job]
2025-02-04 10:41:30.950352 | localhost | skipping: Conditional result was False
2025-02-04 10:41:30.956494 |
2025-02-04 10:41:30.956607 | TASK [upload-logs : Create log directories]
2025-02-04 10:41:31.489980 | localhost | changed
2025-02-04 10:41:31.497730 |
2025-02-04 10:41:31.497882 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-02-04 10:41:32.046436 | localhost -> localhost | ok: Runtime: 0:00:00.007041
2025-02-04 10:41:32.051768 |
2025-02-04 10:41:32.051879 | TASK [upload-logs : Upload logs to log server]
2025-02-04 10:41:32.658321 | localhost | Output suppressed because no_log was given
2025-02-04 10:41:32.664006 |
2025-02-04 10:41:32.664171 | LOOP [upload-logs : Compress console log and json output]
2025-02-04 10:41:32.737605 | localhost | skipping: Conditional result was False
2025-02-04 10:41:32.757458 | localhost | skipping: Conditional result was False
2025-02-04 10:41:32.772811 |
2025-02-04 10:41:32.773024 | LOOP [upload-logs : Upload compressed console log and json output]
2025-02-04 10:41:32.846912 | localhost | skipping: Conditional result was False
2025-02-04 10:41:32.847644 |
2025-02-04 10:41:32.870209 | localhost | skipping: Conditional result was False
2025-02-04 10:41:32.881362 |
2025-02-04 10:41:32.881553 | LOOP [upload-logs : Upload console log and json output]
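Before the upload, generate-zuul-manifest writes a JSON index of the staged log tree that the Zuul web dashboard uses to browse the build's logs; upload-logs then copies the tree to the log server (its output is suppressed above by no_log, since the destination can embed credentials). An illustrative walker that produces a manifest-shaped listing; the role's exact schema and MIME-type handling may differ:

import json
import os

def build_tree(root):
    """Recursively describe a directory; illustration only."""
    entries = []
    for name in sorted(os.listdir(root)):
        path = os.path.join(root, name)
        if os.path.isdir(path):
            entries.append({
                "name": name,
                "mimetype": "application/directory",
                "children": build_tree(path),
            })
        else:
            entries.append({
                "name": name,
                "mimetype": "text/plain",
                "size": os.path.getsize(path),
            })
    return entries

if __name__ == "__main__":
    # Hypothetical local path standing in for the staged log directory.
    print(json.dumps({"tree": build_tree("./logs")}, indent=2))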